2023-07-14 17:13:22,131 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2 2023-07-14 17:13:22,146 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-14 17:13:22,167 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-14 17:13:22,167 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1, deleteOnExit=true 2023-07-14 17:13:22,168 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-14 17:13:22,169 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/test.cache.data in system properties and HBase conf 2023-07-14 17:13:22,169 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.tmp.dir in system properties and HBase conf 2023-07-14 17:13:22,170 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir in system properties and HBase conf 2023-07-14 17:13:22,170 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-14 17:13:22,170 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-14 17:13:22,171 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-14 17:13:22,310 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-14 17:13:22,818 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-14 17:13:22,823 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:13:22,824 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:13:22,825 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 17:13:22,825 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:13:22,825 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 17:13:22,826 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 17:13:22,826 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:13:22,827 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:13:22,827 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 17:13:22,827 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/nfs.dump.dir in system properties and HBase conf 2023-07-14 17:13:22,828 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/java.io.tmpdir in system properties and HBase conf 2023-07-14 17:13:22,828 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:13:22,828 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 17:13:22,829 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 17:13:23,426 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:13:23,429 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:13:23,696 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-14 17:13:23,893 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-14 17:13:23,906 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:23,939 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:23,972 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/java.io.tmpdir/Jetty_localhost_localdomain_40017_hdfs____.o9xp70/webapp 2023-07-14 17:13:24,098 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:40017 2023-07-14 17:13:24,135 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:13:24,136 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:13:24,516 WARN [Listener at localhost.localdomain/37685] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:24,587 WARN [Listener at localhost.localdomain/37685] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:24,610 WARN [Listener at localhost.localdomain/37685] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:24,618 INFO [Listener at localhost.localdomain/37685] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:24,629 INFO [Listener at 
localhost.localdomain/37685] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/java.io.tmpdir/Jetty_localhost_35591_datanode____.6wcyem/webapp 2023-07-14 17:13:24,740 INFO [Listener at localhost.localdomain/37685] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35591 2023-07-14 17:13:25,223 WARN [Listener at localhost.localdomain/37207] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:25,296 WARN [Listener at localhost.localdomain/37207] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:25,311 WARN [Listener at localhost.localdomain/37207] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:25,314 INFO [Listener at localhost.localdomain/37207] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:25,330 INFO [Listener at localhost.localdomain/37207] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/java.io.tmpdir/Jetty_localhost_43733_datanode____.jd7swx/webapp 2023-07-14 17:13:25,444 INFO [Listener at localhost.localdomain/37207] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43733 2023-07-14 17:13:25,481 WARN [Listener at localhost.localdomain/37409] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:25,517 WARN [Listener at localhost.localdomain/37409] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:25,522 WARN [Listener at localhost.localdomain/37409] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:25,525 INFO [Listener at localhost.localdomain/37409] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:25,543 INFO [Listener at localhost.localdomain/37409] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/java.io.tmpdir/Jetty_localhost_37361_datanode____mgpjxt/webapp 2023-07-14 17:13:25,668 INFO [Listener at localhost.localdomain/37409] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37361 2023-07-14 17:13:25,702 WARN [Listener at localhost.localdomain/41607] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:25,932 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaca3e113be93fd59: Processing first storage report for DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7 from datanode 839d2ce9-3c37-45c7-82fb-078e6b4b00f0 2023-07-14 17:13:25,933 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0xaca3e113be93fd59: from storage DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7 node DatanodeRegistration(127.0.0.1:34029, datanodeUuid=839d2ce9-3c37-45c7-82fb-078e6b4b00f0, infoPort=44873, infoSecurePort=0, ipcPort=41607, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf6b91e814628842c: Processing first storage report for DS-b506ef08-1752-4b8c-8067-a73e3b0f1923 from datanode 03883fa9-9050-4c0d-923e-7589418f6294 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf6b91e814628842c: from storage DS-b506ef08-1752-4b8c-8067-a73e3b0f1923 node DatanodeRegistration(127.0.0.1:33411, datanodeUuid=03883fa9-9050-4c0d-923e-7589418f6294, infoPort=36135, infoSecurePort=0, ipcPort=37207, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd07d99f5c06679d: Processing first storage report for DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38 from datanode 987069f3-95b8-4d3e-8d3b-02249727026b 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd07d99f5c06679d: from storage DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38 node DatanodeRegistration(127.0.0.1:39185, datanodeUuid=987069f3-95b8-4d3e-8d3b-02249727026b, infoPort=41393, infoSecurePort=0, ipcPort=37409, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaca3e113be93fd59: Processing first storage report for DS-1df8e8fa-52e8-4433-8a2c-e926f1161f60 from datanode 839d2ce9-3c37-45c7-82fb-078e6b4b00f0 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaca3e113be93fd59: from storage DS-1df8e8fa-52e8-4433-8a2c-e926f1161f60 node DatanodeRegistration(127.0.0.1:34029, datanodeUuid=839d2ce9-3c37-45c7-82fb-078e6b4b00f0, infoPort=44873, infoSecurePort=0, ipcPort=41607, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:25,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf6b91e814628842c: Processing first storage report for DS-2fe1451e-fedc-41a3-9fef-6bf58e5362d2 from datanode 03883fa9-9050-4c0d-923e-7589418f6294 2023-07-14 17:13:25,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf6b91e814628842c: from storage DS-2fe1451e-fedc-41a3-9fef-6bf58e5362d2 node DatanodeRegistration(127.0.0.1:33411, datanodeUuid=03883fa9-9050-4c0d-923e-7589418f6294, infoPort=36135, infoSecurePort=0, ipcPort=37207, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-14 17:13:25,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd07d99f5c06679d: Processing first storage report for 
DS-3942bb9b-d8d2-4716-b7f4-d2b6fc2ce26d from datanode 987069f3-95b8-4d3e-8d3b-02249727026b 2023-07-14 17:13:25,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd07d99f5c06679d: from storage DS-3942bb9b-d8d2-4716-b7f4-d2b6fc2ce26d node DatanodeRegistration(127.0.0.1:39185, datanodeUuid=987069f3-95b8-4d3e-8d3b-02249727026b, infoPort=41393, infoSecurePort=0, ipcPort=37409, storageInfo=lv=-57;cid=testClusterID;nsid=605737805;c=1689354803494), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:26,163 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2 2023-07-14 17:13:26,239 INFO [Listener at localhost.localdomain/41607] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/zookeeper_0, clientPort=54612, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 17:13:26,257 INFO [Listener at localhost.localdomain/41607] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54612 2023-07-14 17:13:26,267 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:26,270 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:26,599 INFO [Listener at localhost.localdomain/41607] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a with version=8 2023-07-14 17:13:26,599 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/hbase-staging 2023-07-14 17:13:26,609 DEBUG [Listener at localhost.localdomain/41607] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 17:13:26,609 DEBUG [Listener at localhost.localdomain/41607] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 17:13:26,609 DEBUG [Listener at localhost.localdomain/41607] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 17:13:26,610 DEBUG [Listener at localhost.localdomain/41607] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
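[Editor's illustrative sketch] The minicluster startup logged at 17:13:22,167 (StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}) corresponds to a test-side call into HBaseTestingUtility. The following is a minimal sketch of how such a test class is typically wired, assuming a hypothetical class name; it is not the actual TestRSGroupsAdmin1 source.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class MiniClusterStartupSketch { // hypothetical name, illustration only
  // HBaseClassTestRule enforces the per-class timeout logged above (13 mins for this run).
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Mirrors the logged option: 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}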
2023-07-14 17:13:27,025 INFO [Listener at localhost.localdomain/41607] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-14 17:13:27,700 INFO [Listener at localhost.localdomain/41607] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:27,764 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:27,765 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:27,765 INFO [Listener at localhost.localdomain/41607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:27,765 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:27,766 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:27,965 INFO [Listener at localhost.localdomain/41607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:28,096 DEBUG [Listener at localhost.localdomain/41607] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-14 17:13:28,204 INFO [Listener at localhost.localdomain/41607] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41281 2023-07-14 17:13:28,216 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:28,218 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:28,249 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41281 connecting to ZooKeeper ensemble=127.0.0.1:54612 2023-07-14 17:13:28,311 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:412810x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:28,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41281-0x1008c7920480000 connected 2023-07-14 17:13:28,377 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:28,378 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-14 17:13:28,381 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:28,398 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41281 2023-07-14 17:13:28,399 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41281 2023-07-14 17:13:28,399 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41281 2023-07-14 17:13:28,406 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41281 2023-07-14 17:13:28,407 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41281 2023-07-14 17:13:28,446 INFO [Listener at localhost.localdomain/41607] log.Log(170): Logging initialized @7095ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-14 17:13:28,641 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:28,642 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:13:28,643 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:28,645 INFO [Listener at localhost.localdomain/41607] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 17:13:28,646 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:28,646 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:28,651 INFO [Listener at localhost.localdomain/41607] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
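[Editor's illustrative sketch] The ZooKeeper connection logged at 17:13:28,249 (ensemble=127.0.0.1:54612) uses the client port that MiniZooKeeperCluster reported at 17:13:26,257. A minimal sketch, assuming the TEST_UTIL instance from the earlier sketch, of reading those values back from the test Configuration; the config keys are standard HBase constants, the helper name is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HConstants;

final class ZkConfigSketch { // hypothetical helper, illustration only
  static String zkEnsemble(HBaseTestingUtility util) {
    Configuration conf = util.getConfiguration();
    // Quorum host(s) of the mini ZooKeeper cluster (127.0.0.1 in this run).
    String quorum = conf.get(HConstants.ZOOKEEPER_QUORUM);
    // Client port the mini cluster picked (54612 in the log above).
    int clientPort = conf.getInt(HConstants.ZOOKEEPER_CLIENT_PORT, -1);
    return quorum + ":" + clientPort;
  }
}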
2023-07-14 17:13:28,767 INFO [Listener at localhost.localdomain/41607] http.HttpServer(1146): Jetty bound to port 39513 2023-07-14 17:13:28,772 INFO [Listener at localhost.localdomain/41607] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:28,815 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:28,819 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67779d68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:28,820 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:28,820 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70894e64{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:28,895 INFO [Listener at localhost.localdomain/41607] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:28,915 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:28,915 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:28,918 INFO [Listener at localhost.localdomain/41607] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:13:28,929 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:28,958 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6a5048{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:13:28,973 INFO [Listener at localhost.localdomain/41607] server.AbstractConnector(333): Started ServerConnector@23da46f6{HTTP/1.1, (http/1.1)}{0.0.0.0:39513} 2023-07-14 17:13:28,973 INFO [Listener at localhost.localdomain/41607] server.Server(415): Started @7622ms 2023-07-14 17:13:28,977 INFO [Listener at localhost.localdomain/41607] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a, hbase.cluster.distributed=false 2023-07-14 17:13:29,062 INFO [Listener at localhost.localdomain/41607] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:29,063 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,063 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,063 INFO [Listener at localhost.localdomain/41607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:29,063 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,064 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:29,071 INFO [Listener at localhost.localdomain/41607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:29,075 INFO [Listener at localhost.localdomain/41607] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44093 2023-07-14 17:13:29,079 INFO [Listener at localhost.localdomain/41607] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:13:29,091 DEBUG [Listener at localhost.localdomain/41607] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:13:29,092 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,095 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,098 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44093 connecting to ZooKeeper ensemble=127.0.0.1:54612 2023-07-14 17:13:29,113 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:440930x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:29,114 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:440930x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:29,117 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:440930x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:29,118 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:440930x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:29,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44093-0x1008c7920480001 connected 2023-07-14 17:13:29,123 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44093 2023-07-14 17:13:29,126 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44093 2023-07-14 17:13:29,130 DEBUG [Listener at localhost.localdomain/41607] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44093 2023-07-14 17:13:29,135 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44093 2023-07-14 17:13:29,136 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44093 2023-07-14 17:13:29,140 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:29,140 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:13:29,141 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:29,142 INFO [Listener at localhost.localdomain/41607] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:13:29,143 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:29,143 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:29,143 INFO [Listener at localhost.localdomain/41607] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
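[Editor's illustrative sketch] The RpcExecutor lines for master and region servers ("Instantiated default.FPBQ.Fifo ... handlerCount=3", and the priority/replication queues alongside) are sized from handler-count settings in the test Configuration. A hedged sketch of the standard keys that are assumed to drive those logged values; the mapping of key to queue is an assumption, not something this log states.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class RpcHandlerConfigSketch { // illustration only
  static Configuration withSmallHandlerPools() {
    Configuration conf = HBaseConfiguration.create();
    // Standard keys assumed to size the RPC executors named in the log
    // (default.FPBQ.Fifo, priority.RWQ.Fifo, replication.FPBQ.Fifo);
    // the values mirror the handlerCount=3 this run logged.
    conf.setInt("hbase.regionserver.handler.count", 3);
    conf.setInt("hbase.regionserver.metahandler.count", 3);
    conf.setInt("hbase.regionserver.replication.handler.count", 3);
    return conf;
  }
}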
2023-07-14 17:13:29,146 INFO [Listener at localhost.localdomain/41607] http.HttpServer(1146): Jetty bound to port 39375 2023-07-14 17:13:29,146 INFO [Listener at localhost.localdomain/41607] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:29,157 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,158 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@57dd3d48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:29,159 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,159 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c0b26d3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:29,176 INFO [Listener at localhost.localdomain/41607] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:29,178 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:29,178 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:29,179 INFO [Listener at localhost.localdomain/41607] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:13:29,180 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,185 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@d602e46{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:29,190 INFO [Listener at localhost.localdomain/41607] server.AbstractConnector(333): Started ServerConnector@688f1242{HTTP/1.1, (http/1.1)}{0.0.0.0:39375} 2023-07-14 17:13:29,190 INFO [Listener at localhost.localdomain/41607] server.Server(415): Started @7839ms 2023-07-14 17:13:29,222 INFO [Listener at localhost.localdomain/41607] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:29,222 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,222 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,223 INFO [Listener at localhost.localdomain/41607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-14 17:13:29,224 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,224 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:29,224 INFO [Listener at localhost.localdomain/41607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:29,226 INFO [Listener at localhost.localdomain/41607] ipc.NettyRpcServer(120): Bind to /148.251.75.209:42361 2023-07-14 17:13:29,227 INFO [Listener at localhost.localdomain/41607] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:13:29,234 DEBUG [Listener at localhost.localdomain/41607] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:13:29,235 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,237 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,238 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42361 connecting to ZooKeeper ensemble=127.0.0.1:54612 2023-07-14 17:13:29,243 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:423610x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:29,245 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:423610x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:29,245 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:423610x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:29,246 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:423610x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:29,247 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42361-0x1008c7920480002 connected 2023-07-14 17:13:29,250 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42361 2023-07-14 17:13:29,251 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42361 2023-07-14 17:13:29,253 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42361 2023-07-14 17:13:29,256 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42361 2023-07-14 17:13:29,257 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42361 2023-07-14 17:13:29,260 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:29,261 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:13:29,261 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:29,262 INFO [Listener at localhost.localdomain/41607] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:13:29,262 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:29,263 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:29,263 INFO [Listener at localhost.localdomain/41607] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 17:13:29,264 INFO [Listener at localhost.localdomain/41607] http.HttpServer(1146): Jetty bound to port 42787 2023-07-14 17:13:29,264 INFO [Listener at localhost.localdomain/41607] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:29,271 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,271 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@130df82f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:29,272 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,272 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7125c9f8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:29,281 INFO [Listener at localhost.localdomain/41607] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:29,282 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:29,283 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:29,283 INFO [Listener at localhost.localdomain/41607] 
session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:13:29,286 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,287 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5b72dc05{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:29,288 INFO [Listener at localhost.localdomain/41607] server.AbstractConnector(333): Started ServerConnector@66040974{HTTP/1.1, (http/1.1)}{0.0.0.0:42787} 2023-07-14 17:13:29,289 INFO [Listener at localhost.localdomain/41607] server.Server(415): Started @7938ms 2023-07-14 17:13:29,304 INFO [Listener at localhost.localdomain/41607] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:29,304 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,305 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,305 INFO [Listener at localhost.localdomain/41607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:29,305 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:29,305 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:29,305 INFO [Listener at localhost.localdomain/41607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:29,308 INFO [Listener at localhost.localdomain/41607] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46457 2023-07-14 17:13:29,308 INFO [Listener at localhost.localdomain/41607] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:13:29,314 DEBUG [Listener at localhost.localdomain/41607] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:13:29,316 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,319 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,321 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46457 connecting to ZooKeeper 
ensemble=127.0.0.1:54612 2023-07-14 17:13:29,325 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:464570x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:29,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46457-0x1008c7920480003 connected 2023-07-14 17:13:29,327 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:29,329 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:29,330 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:29,333 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46457 2023-07-14 17:13:29,334 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46457 2023-07-14 17:13:29,336 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46457 2023-07-14 17:13:29,341 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46457 2023-07-14 17:13:29,341 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46457 2023-07-14 17:13:29,344 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:29,345 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:13:29,345 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:29,346 INFO [Listener at localhost.localdomain/41607] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:13:29,346 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:29,346 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:29,347 INFO [Listener at localhost.localdomain/41607] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
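[Editor's illustrative sketch] This run belongs to TestRSGroupsAdmin1, so once the cluster is up the test exercises the RSGroup admin API from the hbase-rsgroup module. A minimal sketch, assuming the cluster was started with the rsgroup coprocessor and balancer enabled (as the rsgroup test base classes do); the group name and helper class are illustrative, not taken from this log.

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RsGroupAdminSketch { // illustration only
  static void addAndListGroups(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Create a new group, then confirm it shows up next to the default group.
    rsGroupAdmin.addRSGroup("sketch_group");
    for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}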
2023-07-14 17:13:29,348 INFO [Listener at localhost.localdomain/41607] http.HttpServer(1146): Jetty bound to port 33783 2023-07-14 17:13:29,348 INFO [Listener at localhost.localdomain/41607] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:29,356 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,356 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@22b17312{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:29,357 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,357 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@73a2030a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:29,367 INFO [Listener at localhost.localdomain/41607] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:29,368 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:29,369 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:29,369 INFO [Listener at localhost.localdomain/41607] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:13:29,371 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:29,373 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66be370b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:29,374 INFO [Listener at localhost.localdomain/41607] server.AbstractConnector(333): Started ServerConnector@5699ce09{HTTP/1.1, (http/1.1)}{0.0.0.0:33783} 2023-07-14 17:13:29,374 INFO [Listener at localhost.localdomain/41607] server.Server(415): Started @8023ms 2023-07-14 17:13:29,381 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:29,390 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@38eb127{HTTP/1.1, (http/1.1)}{0.0.0.0:46599} 2023-07-14 17:13:29,391 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @8040ms 2023-07-14 17:13:29,391 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:29,405 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): 
master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:13:29,406 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:29,425 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:13:29,425 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:13:29,425 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:13:29,425 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:13:29,426 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:29,427 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:13:29,428 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,41281,1689354806808 from backup master directory 2023-07-14 17:13:29,429 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:13:29,434 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:29,434 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:13:29,435 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
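The quorum address 127.0.0.1:54612 that appears throughout these entries is the mini-cluster's embedded ZooKeeper ensemble. A minimal, hypothetical sketch of pointing an HBase client at such a quorum (host and port copied from the log purely for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ZkQuorumSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");           // hosts of the ZK ensemble
        conf.setInt("hbase.zookeeper.property.clientPort", 54612); // client port seen in the log above
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          System.out.println("connected: " + !connection.isClosed());
        }
      }
    }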
2023-07-14 17:13:29,435 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:29,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-14 17:13:29,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-14 17:13:29,580 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/hbase.id with ID: 541b1292-07c3-43b8-bf41-59fb9df0a64c 2023-07-14 17:13:29,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:29,666 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:29,746 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6660e458 to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:29,787 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@757ae7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:29,834 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:29,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 17:13:29,863 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-14 17:13:29,863 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-14 17:13:29,865 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-14 17:13:29,871 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-14 17:13:29,873 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:29,924 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store-tmp 2023-07-14 17:13:29,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:29,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:13:29,988 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:29,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:29,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:13:29,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:29,989 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:29,989 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:13:29,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/WALs/jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:30,033 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41281%2C1689354806808, suffix=, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/WALs/jenkins-hbase20.apache.org,41281,1689354806808, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/oldWALs, maxLogs=10 2023-07-14 17:13:30,132 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:30,132 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:30,132 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:30,152 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-14 17:13:30,241 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/WALs/jenkins-hbase20.apache.org,41281,1689354806808/jenkins-hbase20.apache.org%2C41281%2C1689354806808.1689354810048 2023-07-14 17:13:30,246 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:30,247 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:30,248 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:30,251 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,253 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,382 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,395 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 17:13:30,435 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 17:13:30,454 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-14 17:13:30,460 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,463 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,489 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:13:30,497 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:30,499 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10403530560, jitterRate=-0.0310957133769989}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:30,499 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:13:30,501 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 17:13:30,537 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 17:13:30,538 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 17:13:30,543 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 17:13:30,546 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-14 17:13:30,606 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 59 msec 2023-07-14 17:13:30,606 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 17:13:30,652 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 17:13:30,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
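The 'proc' column family attributes printed for the master:store region above map directly onto the public descriptor-builder API. A minimal, hypothetical sketch that builds an equivalent family definition (illustrative only; the master constructs this region internally):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilySketch {
      public static void main(String[] args) {
        // Mirrors the attributes logged above: VERSIONS=1, BLOOMFILTER=ROW,
        // BLOCKSIZE=65536, IN_MEMORY=false, BLOCKCACHE=true.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(65536)
            .setInMemory(false)
            .setBlockCacheEnabled(true)
            .build();
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(desc);
      }
    }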
2023-07-14 17:13:30,677 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 17:13:30,685 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 17:13:30,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 17:13:30,703 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:30,704 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 17:13:30,705 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 17:13:30,725 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 17:13:30,732 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:30,732 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:30,733 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:30,733 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:30,733 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:30,740 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,41281,1689354806808, sessionid=0x1008c7920480000, setting cluster-up flag (Was=false) 2023-07-14 17:13:30,762 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:30,781 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 17:13:30,791 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:30,796 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:30,801 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 17:13:30,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:30,805 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.hbase-snapshot/.tmp 2023-07-14 17:13:30,882 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(951): ClusterId : 541b1292-07c3-43b8-bf41-59fb9df0a64c 2023-07-14 17:13:30,883 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(951): ClusterId : 541b1292-07c3-43b8-bf41-59fb9df0a64c 2023-07-14 17:13:30,884 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(951): ClusterId : 541b1292-07c3-43b8-bf41-59fb9df0a64c 2023-07-14 17:13:30,889 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:13:30,889 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:13:30,889 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:13:30,900 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:13:30,900 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:13:30,901 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:13:30,901 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:13:30,903 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:13:30,903 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:13:30,905 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:13:30,907 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ReadOnlyZKClient(139): Connect 0x52881952 to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:30,917 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 
2023-07-14 17:13:30,918 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:13:30,919 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ReadOnlyZKClient(139): Connect 0x00ccb61f to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:30,926 DEBUG [RS:2;jenkins-hbase20:46457] zookeeper.ReadOnlyZKClient(139): Connect 0x5e1da330 to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:30,927 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 17:13:30,941 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 17:13:30,963 DEBUG [RS:1;jenkins-hbase20:42361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c4ffd73, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:30,964 DEBUG [RS:1;jenkins-hbase20:42361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58707222, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:13:30,973 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 17:13:30,974 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:13:30,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-14 17:13:31,003 DEBUG [RS:0;jenkins-hbase20:44093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e0b4876, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:31,003 DEBUG [RS:0;jenkins-hbase20:44093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a34dd6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:13:31,017 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:42361 2023-07-14 17:13:31,029 INFO [RS:1;jenkins-hbase20:42361] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:13:31,030 INFO [RS:1;jenkins-hbase20:42361] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:13:31,030 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1022): About to register with Master. 
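The RSGroupAdminEndpoint coprocessor loaded above is the service exercised by TestRSGroupsAdmin1. A minimal, hypothetical sketch of talking to it through the rsgroup client API (assuming the hbase-rsgroup module is on the classpath and the constructor/method names are as in branch-2.4):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupListSketch {
      public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
          // List the groups the RSGroupAdminEndpoint currently knows about;
          // a fresh cluster has only the "default" group.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " -> " + group.getServers());
          }
        }
      }
    }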
2023-07-14 17:13:31,035 DEBUG [RS:2;jenkins-hbase20:46457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50da4683, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:31,036 DEBUG [RS:2;jenkins-hbase20:46457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11e74cb8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:13:31,037 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:42361, startcode=1689354809221 2023-07-14 17:13:31,041 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44093 2023-07-14 17:13:31,046 INFO [RS:0;jenkins-hbase20:44093] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:13:31,048 INFO [RS:0;jenkins-hbase20:44093] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:13:31,048 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:13:31,051 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:44093, startcode=1689354809062 2023-07-14 17:13:31,055 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:46457 2023-07-14 17:13:31,055 INFO [RS:2;jenkins-hbase20:46457] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:13:31,055 INFO [RS:2;jenkins-hbase20:46457] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:13:31,055 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-14 17:13:31,057 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:46457, startcode=1689354809303 2023-07-14 17:13:31,080 DEBUG [RS:2;jenkins-hbase20:46457] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:13:31,086 DEBUG [RS:1;jenkins-hbase20:42361] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:13:31,087 DEBUG [RS:0;jenkins-hbase20:44093] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:13:31,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 17:13:31,200 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52325, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:13:31,200 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55205, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:13:31,205 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56293, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:13:31,220 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:31,231 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:31,234 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running 
yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:31,243 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:13:31,250 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 17:13:31,251 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:13:31,251 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-14 17:13:31,253 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:13:31,253 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:13:31,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,265 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 17:13:31,277 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 17:13:31,279 WARN [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-14 17:13:31,266 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 17:13:31,279 WARN [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-14 17:13:31,279 WARN [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
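The ServerNotRunningYetException entries and the "sleeping 100 ms and then retrying" warnings above show the region servers polling until the master finishes starting. A generic, hypothetical sketch of that wait-and-retry shape (not HBase code, just the pattern the log describes):

    public class RetryUntilReadySketch {
      interface Registration { boolean tryRegister(); } // stand-in for reportForDuty

      static void registerWithRetry(Registration reg) throws InterruptedException {
        // Mirrors the log: on failure, sleep briefly and try again.
        while (!reg.tryRegister()) {
          Thread.sleep(100L); // "sleeping 100 ms and then retrying"
        }
      }

      public static void main(String[] args) throws InterruptedException {
        final long readyAt = System.currentTimeMillis() + 300;
        registerWithRetry(() -> System.currentTimeMillis() >= readyAt);
        System.out.println("registered");
      }
    }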
2023-07-14 17:13:31,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689354841289 2023-07-14 17:13:31,290 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:13:31,291 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 17:13:31,292 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 17:13:31,293 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:31,296 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 17:13:31,312 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 17:13:31,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 17:13:31,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 17:13:31,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 17:13:31,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-14 17:13:31,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 17:13:31,325 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 17:13:31,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 17:13:31,330 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 17:13:31,331 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 17:13:31,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354811338,5,FailOnTimeoutGroup] 2023-07-14 17:13:31,341 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354811339,5,FailOnTimeoutGroup] 2023-07-14 17:13:31,341 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,341 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 17:13:31,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,380 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:42361, startcode=1689354809221 2023-07-14 17:13:31,381 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:44093, startcode=1689354809062 2023-07-14 17:13:31,380 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:46457, startcode=1689354809303 2023-07-14 17:13:31,407 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,408 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
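Server identifiers in the registration entries above, e.g. jenkins-hbase20.apache.org,44093,1689354809062, follow HBase's hostname,port,startcode form. A minimal, hypothetical sketch of decomposing one with the public ServerName helper (value copied from the log for illustration only):

    import org.apache.hadoop.hbase.ServerName;

    public class ServerNameSketch {
      public static void main(String[] args) {
        ServerName sn = ServerName.valueOf("jenkins-hbase20.apache.org,44093,1689354809062");
        System.out.println(sn.getHostname());   // jenkins-hbase20.apache.org
        System.out.println(sn.getPort());       // 44093
        System.out.println(sn.getStartcode());  // 1689354809062 (server start timestamp)
      }
    }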
2023-07-14 17:13:31,414 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,416 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,421 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 17:13:31,421 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:13:31,421 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 17:13:31,430 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a 2023-07-14 17:13:31,430 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a 2023-07-14 17:13:31,430 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a 2023-07-14 17:13:31,431 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37685 2023-07-14 17:13:31,431 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37685 2023-07-14 17:13:31,431 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37685 2023-07-14 17:13:31,431 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39513 2023-07-14 17:13:31,431 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39513 2023-07-14 17:13:31,431 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39513 2023-07-14 17:13:31,440 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:31,441 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,441 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,441 DEBUG [RS:2;jenkins-hbase20:46457] 
zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,442 WARN [RS:1;jenkins-hbase20:42361] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:13:31,453 INFO [RS:1;jenkins-hbase20:42361] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:31,453 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,442 WARN [RS:0;jenkins-hbase20:44093] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:13:31,442 WARN [RS:2;jenkins-hbase20:46457] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:13:31,454 INFO [RS:0;jenkins-hbase20:44093] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:31,459 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,454 INFO [RS:2;jenkins-hbase20:46457] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:31,487 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,487 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46457,1689354809303] 2023-07-14 17:13:31,487 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44093,1689354809062] 2023-07-14 17:13:31,487 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,42361,1689354809221] 2023-07-14 17:13:31,522 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,522 DEBUG [RS:2;jenkins-hbase20:46457] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,523 DEBUG [RS:2;jenkins-hbase20:46457] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,523 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 
2023-07-14 17:13:31,524 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,525 DEBUG [RS:2;jenkins-hbase20:46457] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,526 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:31,527 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,528 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:31,528 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a 2023-07-14 17:13:31,529 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,530 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,547 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:13:31,547 DEBUG [RS:1;jenkins-hbase20:42361] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:13:31,548 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:13:31,569 INFO [RS:0;jenkins-hbase20:44093] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 
2023-07-14 17:13:31,569 INFO [RS:1;jenkins-hbase20:42361] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:13:31,570 INFO [RS:2;jenkins-hbase20:46457] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:13:31,609 INFO [RS:1;jenkins-hbase20:42361] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:13:31,609 INFO [RS:0;jenkins-hbase20:44093] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:13:31,609 INFO [RS:2;jenkins-hbase20:46457] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:13:31,625 INFO [RS:1;jenkins-hbase20:42361] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:13:31,626 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,625 INFO [RS:0;jenkins-hbase20:44093] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:13:31,626 INFO [RS:2;jenkins-hbase20:46457] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:13:31,626 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,626 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,633 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:31,631 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:13:31,641 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:13:31,649 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:13:31,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:13:31,658 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,658 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 17:13:31,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:31,659 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,661 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,661 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,661 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,660 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,660 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,661 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:13:31,662 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:13:31,662 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,663 DEBUG [RS:1;jenkins-hbase20:42361] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,662 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,671 DEBUG [RS:0;jenkins-hbase20:44093] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,672 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:13:31,664 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:31,672 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,672 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 
17:13:31,672 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:13:31,672 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,672 DEBUG [RS:2;jenkins-hbase20:46457] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:31,676 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:31,676 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:13:31,678 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:31,678 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:13:31,681 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:31,681 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:13:31,682 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:31,687 
INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,690 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:31,699 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,699 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,700 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,700 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,700 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,700 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:31,705 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 17:13:31,708 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:13:31,717 INFO [RS:1;jenkins-hbase20:42361] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:13:31,719 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:31,720 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11564904800, jitterRate=0.07706569135189056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:13:31,720 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:13:31,721 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:13:31,721 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:13:31,721 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:13:31,721 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:13:31,721 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:13:31,722 INFO [RS:0;jenkins-hbase20:44093] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:13:31,727 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(166): Chore ScheduledChore 
name=jenkins-hbase20.apache.org,42361,1689354809221-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,727 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44093,1689354809062-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,727 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,728 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:13:31,728 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,728 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:13:31,728 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:31,737 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:13:31,737 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 17:13:31,752 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 17:13:31,752 INFO [RS:2;jenkins-hbase20:46457] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:13:31,752 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46457,1689354809303-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:13:31,762 INFO [RS:0;jenkins-hbase20:44093] regionserver.Replication(203): jenkins-hbase20.apache.org,44093,1689354809062 started 2023-07-14 17:13:31,763 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44093,1689354809062, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44093, sessionid=0x1008c7920480001 2023-07-14 17:13:31,763 INFO [RS:1;jenkins-hbase20:42361] regionserver.Replication(203): jenkins-hbase20.apache.org,42361,1689354809221 started 2023-07-14 17:13:31,763 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,42361,1689354809221, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:42361, sessionid=0x1008c7920480002 2023-07-14 17:13:31,763 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:13:31,763 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:13:31,763 DEBUG [RS:1;jenkins-hbase20:42361] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,763 DEBUG [RS:0;jenkins-hbase20:44093] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,764 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42361,1689354809221' 2023-07-14 17:13:31,764 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:13:31,764 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44093,1689354809062' 2023-07-14 17:13:31,767 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:13:31,770 INFO [RS:2;jenkins-hbase20:46457] regionserver.Replication(203): jenkins-hbase20.apache.org,46457,1689354809303 started 2023-07-14 17:13:31,770 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46457,1689354809303, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46457, sessionid=0x1008c7920480003 2023-07-14 17:13:31,771 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:13:31,771 DEBUG [RS:2;jenkins-hbase20:46457] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,771 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46457,1689354809303' 2023-07-14 17:13:31,771 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:13:31,772 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:13:31,772 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 
2023-07-14 17:13:31,772 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:13:31,772 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:13:31,772 DEBUG [RS:0;jenkins-hbase20:44093] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:31,773 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44093,1689354809062' 2023-07-14 17:13:31,773 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:13:31,773 DEBUG [RS:0;jenkins-hbase20:44093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:13:31,774 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:13:31,774 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:13:31,774 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:13:31,774 DEBUG [RS:1;jenkins-hbase20:42361] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:31,774 DEBUG [RS:0;jenkins-hbase20:44093] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:13:31,775 INFO [RS:0;jenkins-hbase20:44093] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:13:31,775 INFO [RS:0;jenkins-hbase20:44093] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 17:13:31,775 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42361,1689354809221' 2023-07-14 17:13:31,775 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:13:31,775 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:13:31,775 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:13:31,775 DEBUG [RS:1;jenkins-hbase20:42361] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:13:31,776 DEBUG [RS:2;jenkins-hbase20:46457] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:31,776 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46457,1689354809303' 2023-07-14 17:13:31,776 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:13:31,776 DEBUG [RS:2;jenkins-hbase20:46457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:13:31,777 DEBUG [RS:1;jenkins-hbase20:42361] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:13:31,777 INFO [RS:1;jenkins-hbase20:42361] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:13:31,778 INFO [RS:1;jenkins-hbase20:42361] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 17:13:31,786 DEBUG [RS:2;jenkins-hbase20:46457] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:13:31,787 INFO [RS:2;jenkins-hbase20:46457] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:13:31,787 INFO [RS:2;jenkins-hbase20:46457] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 17:13:31,788 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 17:13:31,799 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 17:13:31,894 INFO [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46457%2C1689354809303, suffix=, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,46457,1689354809303, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:31,898 INFO [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44093%2C1689354809062, suffix=, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,44093,1689354809062, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:31,899 INFO [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42361%2C1689354809221, suffix=, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,42361,1689354809221, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:31,958 DEBUG [jenkins-hbase20:41281] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 17:13:31,972 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:31,973 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:31,975 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:31,974 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:31,987 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:31,987 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:31,988 DEBUG [jenkins-hbase20:41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:31,989 DEBUG [jenkins-hbase20:41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:31,990 DEBUG [jenkins-hbase20:41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:31,990 DEBUG [jenkins-hbase20:41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:31,990 DEBUG [jenkins-hbase20:41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:32,000 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:32,000 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:32,001 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:32,028 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42361,1689354809221, state=OPENING 2023-07-14 17:13:32,035 INFO [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,46457,1689354809303/jenkins-hbase20.apache.org%2C46457%2C1689354809303.1689354811900 2023-07-14 17:13:32,036 DEBUG [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK]] 2023-07-14 17:13:32,039 INFO [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,42361,1689354809221/jenkins-hbase20.apache.org%2C42361%2C1689354809221.1689354811901 2023-07-14 17:13:32,040 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 17:13:32,041 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:32,042 DEBUG [zk-event-processor-pool-0] 
master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:32,042 DEBUG [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:32,043 INFO [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,44093,1689354809062/jenkins-hbase20.apache.org%2C44093%2C1689354809062.1689354811900 2023-07-14 17:13:32,045 DEBUG [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:32,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:32,230 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:32,232 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:32,235 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:32,246 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 17:13:32,246 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:32,250 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42361%2C1689354809221.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,42361,1689354809221, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:32,275 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:32,275 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:32,277 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:32,284 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,42361,1689354809221/jenkins-hbase20.apache.org%2C42361%2C1689354809221.meta.1689354812251.meta 2023-07-14 17:13:32,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:32,285 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:32,287 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:13:32,290 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 17:13:32,292 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 17:13:32,297 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 17:13:32,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:32,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 17:13:32,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 17:13:32,301 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:13:32,302 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:32,302 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:32,303 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality 
to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:13:32,304 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:32,305 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:13:32,306 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:32,306 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:32,307 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:13:32,307 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:32,308 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:13:32,309 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:32,309 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:32,310 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:13:32,310 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:32,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:32,314 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:32,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 17:13:32,319 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:13:32,321 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10273101760, jitterRate=-0.043242841958999634}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:13:32,321 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:13:32,329 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689354812223 2023-07-14 17:13:32,346 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 17:13:32,347 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 17:13:32,347 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42361,1689354809221, state=OPEN 2023-07-14 17:13:32,350 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:32,350 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:32,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 17:13:32,355 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, 
ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42361,1689354809221 in 304 msec 2023-07-14 17:13:32,360 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 17:13:32,360 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 605 msec 2023-07-14 17:13:32,367 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.3790 sec 2023-07-14 17:13:32,367 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689354812367, completionTime=-1 2023-07-14 17:13:32,368 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 17:13:32,368 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-14 17:13:32,396 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:32,399 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49530, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:32,419 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:32,429 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 17:13:32,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 17:13:32,430 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 17:13:32,430 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689354872430 2023-07-14 17:13:32,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689354932431 2023-07-14 17:13:32,431 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 62 msec 2023-07-14 17:13:32,453 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41281,1689354806808-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:32,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41281,1689354806808-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:32,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41281,1689354806808-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:32,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:32,455 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:41281, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:32,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:32,458 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:32,466 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 17:13:32,472 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,476 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02 empty. 2023-07-14 17:13:32,476 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,477 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 17:13:32,486 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-14 17:13:32,486 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:32,489 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 17:13:32,493 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:32,496 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:32,511 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,516 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 empty. 2023-07-14 17:13:32,519 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,519 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 17:13:32,522 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:32,525 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f9434bc3110cf1c29610cbaaa78c2a02, NAME => 'hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:32,553 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:32,554 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f9434bc3110cf1c29610cbaaa78c2a02, disabling compactions & flushes 2023-07-14 17:13:32,554 
INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:32,554 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:32,554 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. after waiting 0 ms 2023-07-14 17:13:32,554 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:32,554 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:32,554 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f9434bc3110cf1c29610cbaaa78c2a02: 2023-07-14 17:13:32,555 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:32,559 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 773f58cde6eff004015f5064f08a8726, NAME => 'hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:32,561 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:32,585 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:32,586 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 773f58cde6eff004015f5064f08a8726, disabling compactions & flushes 2023-07-14 17:13:32,586 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:32,586 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:32,587 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. after waiting 0 ms 2023-07-14 17:13:32,587 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 
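Aside, for readers tracing the two HMaster(2148) create requests logged above: the shell-style descriptors for 'hbase:rsgroup' and 'hbase:namespace' map roughly onto the HBase 2.x TableDescriptorBuilder API. The sketch below is illustrative only and assumes an already-open client Connection named conn; in the log itself the master builds these system tables internally, no client call is involved.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class RsGroupTableSketch {
  // Builds a descriptor roughly equivalent to the logged create 'hbase:rsgroup' request:
  // one family 'm' with VERSIONS=1 and BLOCKSIZE=65536, the MultiRowMutationEndpoint
  // coprocessor, and the DisabledRegionSplitPolicy split policy.
  static void createRsGroupLikeTable(Connection conn) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase", "rsgroup"))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)
            .setBlocksize(65536)
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(desc);
    }
  }
}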
2023-07-14 17:13:32,587 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:32,587 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 773f58cde6eff004015f5064f08a8726: 2023-07-14 17:13:32,588 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354812565"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354812565"}]},"ts":"1689354812565"} 2023-07-14 17:13:32,591 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:32,592 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354812592"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354812592"}]},"ts":"1689354812592"} 2023-07-14 17:13:32,615 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:32,617 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:32,617 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:32,619 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:32,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354812619"}]},"ts":"1689354812619"} 2023-07-14 17:13:32,623 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354812618"}]},"ts":"1689354812618"} 2023-07-14 17:13:32,627 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 17:13:32,629 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 17:13:32,631 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:32,631 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:32,631 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:32,631 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:32,631 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:32,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:32,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:32,633 DEBUG [PEWorker-5] 
balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:32,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:32,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:32,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f9434bc3110cf1c29610cbaaa78c2a02, ASSIGN}] 2023-07-14 17:13:32,634 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, ASSIGN}] 2023-07-14 17:13:32,637 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f9434bc3110cf1c29610cbaaa78c2a02, ASSIGN 2023-07-14 17:13:32,637 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, ASSIGN 2023-07-14 17:13:32,639 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:32,639 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f9434bc3110cf1c29610cbaaa78c2a02, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:32,640 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-14 17:13:32,642 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f9434bc3110cf1c29610cbaaa78c2a02, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:32,642 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:32,643 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354812642"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354812642"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354812642"}]},"ts":"1689354812642"} 2023-07-14 17:13:32,643 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354812642"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354812642"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354812642"}]},"ts":"1689354812642"} 2023-07-14 17:13:32,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:32,650 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure f9434bc3110cf1c29610cbaaa78c2a02, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:32,802 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:32,802 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:32,804 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:32,805 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:32,807 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38866, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:32,808 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:32,816 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 
2023-07-14 17:13:32,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f9434bc3110cf1c29610cbaaa78c2a02, NAME => 'hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:32,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:13:32,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. service=MultiRowMutationService 2023-07-14 17:13:32,821 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-14 17:13:32,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:32,822 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:32,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 773f58cde6eff004015f5064f08a8726, NAME => 'hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:32,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:32,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,825 INFO [StoreOpener-f9434bc3110cf1c29610cbaaa78c2a02-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,825 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,827 DEBUG [StoreOpener-f9434bc3110cf1c29610cbaaa78c2a02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/m 2023-07-14 17:13:32,828 DEBUG [StoreOpener-f9434bc3110cf1c29610cbaaa78c2a02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/m 2023-07-14 17:13:32,828 INFO [StoreOpener-f9434bc3110cf1c29610cbaaa78c2a02-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f9434bc3110cf1c29610cbaaa78c2a02 columnFamilyName m 2023-07-14 17:13:32,829 INFO [StoreOpener-f9434bc3110cf1c29610cbaaa78c2a02-1] regionserver.HStore(310): Store=f9434bc3110cf1c29610cbaaa78c2a02/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:32,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,833 DEBUG [StoreOpener-773f58cde6eff004015f5064f08a8726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info 2023-07-14 17:13:32,833 DEBUG [StoreOpener-773f58cde6eff004015f5064f08a8726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info 2023-07-14 17:13:32,833 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 773f58cde6eff004015f5064f08a8726 columnFamilyName info 2023-07-14 17:13:32,834 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] regionserver.HStore(310): Store=773f58cde6eff004015f5064f08a8726/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:32,835 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:32,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:32,842 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f9434bc3110cf1c29610cbaaa78c2a02; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@63bf5fe6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:32,842 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f9434bc3110cf1c29610cbaaa78c2a02: 2023-07-14 17:13:32,844 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02., pid=9, masterSystemTime=1689354812804 2023-07-14 17:13:32,847 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:32,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:32,851 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 
2023-07-14 17:13:32,852 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f9434bc3110cf1c29610cbaaa78c2a02, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:32,853 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354812852"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354812852"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354812852"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354812852"}]},"ts":"1689354812852"} 2023-07-14 17:13:32,854 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:32,855 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 773f58cde6eff004015f5064f08a8726; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9724861600, jitterRate=-0.09430168569087982}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:32,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 773f58cde6eff004015f5064f08a8726: 2023-07-14 17:13:32,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726., pid=8, masterSystemTime=1689354812802 2023-07-14 17:13:32,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:32,862 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 
2023-07-14 17:13:32,863 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:32,864 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354812863"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354812863"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354812863"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354812863"}]},"ts":"1689354812863"} 2023-07-14 17:13:32,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-14 17:13:32,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure f9434bc3110cf1c29610cbaaa78c2a02, server=jenkins-hbase20.apache.org,46457,1689354809303 in 211 msec 2023-07-14 17:13:32,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-14 17:13:32,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f9434bc3110cf1c29610cbaaa78c2a02, ASSIGN in 232 msec 2023-07-14 17:13:32,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-14 17:13:32,873 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:32,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,44093,1689354809062 in 221 msec 2023-07-14 17:13:32,873 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354812873"}]},"ts":"1689354812873"} 2023-07-14 17:13:32,877 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 17:13:32,877 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-14 17:13:32,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, ASSIGN in 239 msec 2023-07-14 17:13:32,879 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:32,879 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354812879"}]},"ts":"1689354812879"} 2023-07-14 17:13:32,880 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:32,881 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 17:13:32,883 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 460 msec 2023-07-14 17:13:32,884 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:32,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 398 msec 2023-07-14 17:13:32,891 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 17:13:32,892 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:13:32,892 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:32,914 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:32,918 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38872, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:32,934 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:32,934 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 17:13:32,937 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:32,940 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 17:13:32,940 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-14 17:13:32,961 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:13:32,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 44 msec 2023-07-14 17:13:32,971 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 17:13:32,982 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:13:32,989 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-14 17:13:32,999 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 17:13:33,000 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 17:13:33,000 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.565sec 2023-07-14 17:13:33,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-14 17:13:33,006 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 17:13:33,006 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 17:13:33,008 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41281,1689354806808-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-14 17:13:33,008 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41281,1689354806808-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
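The two CreateNamespaceProcedure entries above (pid=10 for 'default', pid=11 for 'hbase') create the built-in namespaces during first master startup. A user namespace goes through the same procedure when a client calls Admin.createNamespace; a minimal sketch, assuming an open Connection named conn:

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public final class NamespaceSketch {
  // Submitting this drives a CreateNamespaceProcedure on the master,
  // analogous to pid=10/11 in the log above.
  static void createUserNamespace(Connection conn, String name) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      admin.createNamespace(NamespaceDescriptor.create(name).build());
    }
  }
}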
2023-07-14 17:13:33,009 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ReadOnlyZKClient(139): Connect 0x41299102 to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:33,024 DEBUG [Listener at localhost.localdomain/41607] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23539661, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:33,034 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:33,034 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:33,038 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 17:13:33,043 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:13:33,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 17:13:33,052 DEBUG [hconnection-0x37307bc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:33,067 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49540, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:33,076 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:33,077 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:33,087 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 17:13:33,091 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33882, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 17:13:33,103 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 17:13:33,103 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:33,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-14 17:13:33,108 DEBUG [Listener at localhost.localdomain/41607] 
zookeeper.ReadOnlyZKClient(139): Connect 0x622a836f to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:33,118 DEBUG [Listener at localhost.localdomain/41607] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@422b55da, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:33,118 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54612 2023-07-14 17:13:33,122 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:33,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008c792048000a connected 2023-07-14 17:13:33,152 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=697, MaxFileDescriptor=60000, SystemLoadAverage=572, ProcessCount=173, AvailableMemoryMB=4146 2023-07-14 17:13:33,154 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-14 17:13:33,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:33,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:33,219 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-14 17:13:33,230 INFO [Listener at localhost.localdomain/41607] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:33,231 INFO [Listener at localhost.localdomain/41607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer 
hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:33,235 INFO [Listener at localhost.localdomain/41607] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38517 2023-07-14 17:13:33,236 INFO [Listener at localhost.localdomain/41607] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:13:33,237 DEBUG [Listener at localhost.localdomain/41607] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:13:33,238 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:33,242 INFO [Listener at localhost.localdomain/41607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:33,245 INFO [Listener at localhost.localdomain/41607] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38517 connecting to ZooKeeper ensemble=127.0.0.1:54612 2023-07-14 17:13:33,293 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:385170x0, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:33,295 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(162): regionserver:385170x0, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:13:33,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38517-0x1008c792048000b connected 2023-07-14 17:13:33,296 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-14 17:13:33,297 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ZKUtil(164): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:33,298 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38517 2023-07-14 17:13:33,298 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38517 2023-07-14 17:13:33,301 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38517 2023-07-14 17:13:33,302 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38517 2023-07-14 17:13:33,302 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38517 2023-07-14 17:13:33,305 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:33,305 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 
17:13:33,305 INFO [Listener at localhost.localdomain/41607] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:33,305 INFO [Listener at localhost.localdomain/41607] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:13:33,305 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:33,306 INFO [Listener at localhost.localdomain/41607] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:33,306 INFO [Listener at localhost.localdomain/41607] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 17:13:33,306 INFO [Listener at localhost.localdomain/41607] http.HttpServer(1146): Jetty bound to port 36911 2023-07-14 17:13:33,306 INFO [Listener at localhost.localdomain/41607] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:33,311 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:33,311 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34e957d0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:33,311 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:33,312 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4807f72c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:33,319 INFO [Listener at localhost.localdomain/41607] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:33,319 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:33,319 INFO [Listener at localhost.localdomain/41607] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:33,320 INFO [Listener at localhost.localdomain/41607] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 17:13:33,321 INFO [Listener at localhost.localdomain/41607] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:33,322 INFO [Listener at localhost.localdomain/41607] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3bc525f6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:33,324 INFO [Listener at 
localhost.localdomain/41607] server.AbstractConnector(333): Started ServerConnector@4d21a747{HTTP/1.1, (http/1.1)}{0.0.0.0:36911} 2023-07-14 17:13:33,324 INFO [Listener at localhost.localdomain/41607] server.Server(415): Started @11973ms 2023-07-14 17:13:33,329 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(951): ClusterId : 541b1292-07c3-43b8-bf41-59fb9df0a64c 2023-07-14 17:13:33,329 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:13:33,332 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:13:33,332 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:13:33,334 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:13:33,336 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ReadOnlyZKClient(139): Connect 0x6f382546 to 127.0.0.1:54612 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:13:33,342 DEBUG [RS:3;jenkins-hbase20:38517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61ffc6d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:13:33,343 DEBUG [RS:3;jenkins-hbase20:38517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65c04e36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:13:33,350 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:38517 2023-07-14 17:13:33,350 INFO [RS:3;jenkins-hbase20:38517] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:13:33,350 INFO [RS:3;jenkins-hbase20:38517] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:13:33,350 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:13:33,351 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,41281,1689354806808 with isa=jenkins-hbase20.apache.org/148.251.75.209:38517, startcode=1689354813230 2023-07-14 17:13:33,351 DEBUG [RS:3;jenkins-hbase20:38517] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:13:33,356 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55113, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:13:33,357 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41281] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,357 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-14 17:13:33,358 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a 2023-07-14 17:13:33,358 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37685 2023-07-14 17:13:33,358 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39513 2023-07-14 17:13:33,362 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:33,362 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:33,362 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:33,362 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:33,363 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,363 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38517,1689354813230] 2023-07-14 17:13:33,363 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:33,363 WARN [RS:3;jenkins-hbase20:38517] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
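The ListRSGroupInfos master service request logged earlier in this section, and the default-group refreshes here, go through the RSGroupAdminEndpoint this module installs. A rough client-side sketch, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (the same plumbing the test wraps in VerifyingRSGroupAdminClient) and an open Connection named conn:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class ListGroupsSketch {
  // Issues the same RSGroupAdminService.ListRSGroupInfos call seen in the log.
  static void printGroups(Connection conn) throws IOException {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    List<RSGroupInfo> infos = groups.listRSGroups();
    for (RSGroupInfo info : infos) {
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}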
2023-07-14 17:13:33,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:33,363 INFO [RS:3;jenkins-hbase20:38517] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:33,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:33,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:33,363 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,364 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:13:33,364 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,364 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,364 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,375 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,41281,1689354806808] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-14 17:13:33,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:33,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:33,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:33,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:33,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:33,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:33,382 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:33,382 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,383 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:33,383 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ZKUtil(162): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:33,384 DEBUG [RS:3;jenkins-hbase20:38517] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:13:33,385 INFO [RS:3;jenkins-hbase20:38517] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:13:33,396 INFO [RS:3;jenkins-hbase20:38517] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:13:33,396 INFO [RS:3;jenkins-hbase20:38517] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:13:33,396 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:33,397 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:13:33,399 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 17:13:33,399 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,399 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,399 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,400 DEBUG [RS:3;jenkins-hbase20:38517] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:13:33,402 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:33,402 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:33,402 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:13:33,414 INFO [RS:3;jenkins-hbase20:38517] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:13:33,414 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38517,1689354813230-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:13:33,424 INFO [RS:3;jenkins-hbase20:38517] regionserver.Replication(203): jenkins-hbase20.apache.org,38517,1689354813230 started 2023-07-14 17:13:33,424 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38517,1689354813230, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38517, sessionid=0x1008c792048000b 2023-07-14 17:13:33,424 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:13:33,424 DEBUG [RS:3;jenkins-hbase20:38517] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,424 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38517,1689354813230' 2023-07-14 17:13:33,424 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:13:33,425 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38517,1689354813230' 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:13:33,426 DEBUG [RS:3;jenkins-hbase20:38517] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:13:33,426 INFO [RS:3;jenkins-hbase20:38517] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:13:33,426 INFO [RS:3;jenkins-hbase20:38517] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
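
The entries above trace a fourth region server (RS:3, port 38517) starting inside the mini-cluster: connecting to ZooKeeper, reporting for duty to the master, registering its ephemeral znode under /hbase/rs, and spinning up its WAL provider, executors, chores, and procedure managers. A minimal sketch, assuming only the standard HBaseTestingUtility / MiniHBaseCluster test APIs (this is illustrative, not the actual TestRSGroupsAdmin1 code), of how a test brings up such an extra region server:

// Hedged sketch: start a mini-cluster and then an additional region server,
// roughly matching the RS:3 startup/registration sequence logged above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class ExtraRegionServerSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);                 // one master, three region servers
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();

    // Start a fourth region server; it reports for duty to the master and
    // creates its ephemeral znode under /hbase/rs, as in the log above.
    JVMClusterUtil.RegionServerThread rs = cluster.startRegionServer();
    rs.waitForServerOnline();

    util.shutdownMiniCluster();
  }
}
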
2023-07-14 17:13:33,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:33,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:33,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:33,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:33,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:33,476 DEBUG [hconnection-0x2d4e8d6d-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:33,488 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49556, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:33,493 DEBUG [hconnection-0x2d4e8d6d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:33,496 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60774, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:33,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:33,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:33,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:33,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:33,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:33882 deadline: 1689356013508, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:33,510 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:33,512 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:33,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:33,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:33,514 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:33,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:33,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:33,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:33,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:33,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:33,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:33,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:33,526 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:33,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:33,531 INFO [RS:3;jenkins-hbase20:38517] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38517%2C1689354813230, suffix=, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,38517,1689354813230, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:33,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:33,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:33,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:33,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:33,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:33,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:33,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:33,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:33,562 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:33,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:33,562 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:33,563 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:33,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:33,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:33,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 17:13:33,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 17:13:33,571 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 17:13:33,572 INFO [RS:3;jenkins-hbase20:38517] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,38517,1689354813230/jenkins-hbase20.apache.org%2C38517%2C1689354813230.1689354813533 2023-07-14 17:13:33,573 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42361,1689354809221, state=CLOSING 2023-07-14 17:13:33,573 DEBUG [RS:3;jenkins-hbase20:38517] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:33,575 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:33,575 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:33,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:33,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-14 17:13:33,743 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:13:33,743 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 
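
The rsgroup activity above — add rsgroup, list rsgroup, move servers into Group_testTableMoveTruncateAndDrop_1337281511, and the expected ConstraintException when the master's own address is moved — corresponds to the RSGroupAdminClient calls visible in the earlier stack trace. Moving a server also forces the regions it hosts (here hbase:meta, region 1588230740) to be reassigned to servers that stay in the source group, which is what the TransitRegionStateProcedure entries record. A minimal sketch, assuming the RSGroupAdminClient and Address APIs named in that trace; the connection setup is illustrative only:

// Hedged sketch of the rsgroup admin calls behind the log entries above.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);

      // Create the target group, then move region servers into it; regions on
      // those servers (including hbase:meta) are moved back to the remaining
      // default-group servers before the move completes.
      admin.addRSGroup("Group_testTableMoveTruncateAndDrop_1337281511");
      admin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 38517)),
          "Group_testTableMoveTruncateAndDrop_1337281511");

      // Attempting the same move with the active master's address fails with
      // ConstraintException ("... is either offline or it does not exist"),
      // the expected error logged during test setup above.
    }
  }
}
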
2023-07-14 17:13:33,743 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:13:33,743 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:13:33,743 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:13:33,744 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.49 KB heapSize=5 KB 2023-07-14 17:13:33,831 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.31 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/0e82c32b62ac4c98992d2d3a10a44f24 2023-07-14 17:13:34,313 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/3460883fcabf46ff8c029dc735128b22 2023-07-14 17:13:34,323 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/0e82c32b62ac4c98992d2d3a10a44f24 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/0e82c32b62ac4c98992d2d3a10a44f24 2023-07-14 17:13:34,335 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/0e82c32b62ac4c98992d2d3a10a44f24, entries=20, sequenceid=14, filesize=7.0 K 2023-07-14 17:13:34,338 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/3460883fcabf46ff8c029dc735128b22 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/3460883fcabf46ff8c029dc735128b22 2023-07-14 17:13:34,347 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/3460883fcabf46ff8c029dc735128b22, entries=4, sequenceid=14, filesize=4.8 K 2023-07-14 17:13:34,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.49 KB/2554, heapSize ~4.72 KB/4832, currentSize=0 B/0 for 1588230740 in 606ms, sequenceid=14, compaction requested=false 2023-07-14 17:13:34,352 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 17:13:34,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-14 17:13:34,370 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:13:34,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:13:34,371 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:13:34,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase20.apache.org,44093,1689354809062 record at close sequenceid=14 2023-07-14 17:13:34,373 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-14 17:13:34,374 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-14 17:13:34,379 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-14 17:13:34,379 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42361,1689354809221 in 799 msec 2023-07-14 17:13:34,381 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:34,532 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:13:34,532 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44093,1689354809062, state=OPENING 2023-07-14 17:13:34,533 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:34,533 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:34,533 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:34,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-14 17:13:34,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 17:13:34,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:34,701 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44093%2C1689354809062.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,44093,1689354809062, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:34,731 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:34,731 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:34,745 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:34,751 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,44093,1689354809062/jenkins-hbase20.apache.org%2C44093%2C1689354809062.meta.1689354814703.meta 2023-07-14 17:13:34,751 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK], DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK]] 2023-07-14 17:13:34,751 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: 
{ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 17:13:34,752 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 17:13:34,752 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 17:13:34,754 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:13:34,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:34,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:34,756 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:13:34,766 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/0e82c32b62ac4c98992d2d3a10a44f24 2023-07-14 17:13:34,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:34,767 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:13:34,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:34,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:34,769 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:13:34,770 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:34,770 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:13:34,772 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:34,772 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:34,772 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:13:34,786 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/3460883fcabf46ff8c029dc735128b22 2023-07-14 17:13:34,786 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:34,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:34,792 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:34,796 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 17:13:34,798 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:13:34,799 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9913687680, jitterRate=-0.07671588659286499}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:13:34,799 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:13:34,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=14, masterSystemTime=1689354814687 2023-07-14 17:13:34,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 17:13:34,803 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 17:13:34,804 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44093,1689354809062, state=OPEN 2023-07-14 17:13:34,806 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:34,806 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:34,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-14 17:13:34,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44093,1689354809062 in 273 msec 2023-07-14 17:13:34,817 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, 
REOPEN/MOVE in 1.2450 sec 2023-07-14 17:13:35,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to default 2023-07-14 17:13:35,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:35,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:35,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:35,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:35,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:35,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:35,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:35,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:35,594 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:35,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-14 17:13:35,606 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:35,607 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:35,607 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:35,608 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:35,622 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, 
state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:35,624 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42361] ipc.CallRunner(144): callId: 39 service: ClientService methodName: Get size: 151 connection: 148.251.75.209:49530 deadline: 1689354875623, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44093 startCode=1689354809062. As of locationSeqNum=14. 2023-07-14 17:13:35,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 17:13:35,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 17:13:35,739 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:35,740 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 empty. 2023-07-14 17:13:35,741 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:35,743 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:35,750 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:35,751 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 empty. 2023-07-14 17:13:35,752 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 empty. 
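For context on the RSGroupAdminService requests recorded above (MoveServers into Group_testTableMoveTruncateAndDrop_1337281511, ListRSGroupInfos, GetRSGroupInfo), the following is a minimal client-side sketch of calls that would issue such requests, assuming the RSGroupAdminClient API of the hbase-rsgroup module. Only the group name and the two region-server host/port pairs are taken from the log; the class wrapper and connection setup are assumptions, not part of the test.

import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_1337281511";
      // Create the group, then move the two region servers named in the log into it.
      groups.addRSGroup(group);
      groups.moveServers(new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase20.apache.org", 38517),
          Address.fromParts("jenkins-hbase20.apache.org", 42361))), group);
      // Corresponds to the GetRSGroupInfo request logged afterwards.
      RSGroupInfo info = groups.getRSGroupInfo(group);
      System.out.println(info.getServers());
    }
  }
}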
2023-07-14 17:13:35,752 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:35,758 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:35,763 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:35,764 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae empty. 2023-07-14 17:13:35,765 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:35,770 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:35,772 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 empty. 
2023-07-14 17:13:35,773 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:35,773 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 17:13:35,827 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:35,829 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 96fb343f7f9d9c07b808aaf833d776d2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:35,835 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 42014b8beb42dd7149481be1fd826fb9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:35,840 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 06199b5c9943b1c3c8422ea2364265f0, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:35,926 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:35,936 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] 
regionserver.HRegion(1604): Closing 96fb343f7f9d9c07b808aaf833d776d2, disabling compactions & flushes 2023-07-14 17:13:35,936 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:35,936 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:35,936 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. after waiting 0 ms 2023-07-14 17:13:35,936 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:35,936 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:35,936 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 96fb343f7f9d9c07b808aaf833d776d2: 2023-07-14 17:13:35,937 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 332da47a9120285a896fd3916c926dae, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:35,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 17:13:35,951 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:35,957 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 42014b8beb42dd7149481be1fd826fb9, disabling compactions & flushes 2023-07-14 17:13:35,957 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:35,957 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 
2023-07-14 17:13:35,957 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. after waiting 0 ms 2023-07-14 17:13:35,958 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:35,958 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:35,958 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 42014b8beb42dd7149481be1fd826fb9: 2023-07-14 17:13:35,959 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f55ac5dc07100264756a86ce837d79e6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:35,959 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:35,959 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 06199b5c9943b1c3c8422ea2364265f0, disabling compactions & flushes 2023-07-14 17:13:35,960 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. after waiting 0 ms 2023-07-14 17:13:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:35,960 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 
2023-07-14 17:13:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 06199b5c9943b1c3c8422ea2364265f0: 2023-07-14 17:13:35,985 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:35,989 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 332da47a9120285a896fd3916c926dae, disabling compactions & flushes 2023-07-14 17:13:35,989 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:35,989 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:35,989 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. after waiting 0 ms 2023-07-14 17:13:35,989 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:35,989 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:35,989 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 332da47a9120285a896fd3916c926dae: 2023-07-14 17:13:35,993 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:35,993 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f55ac5dc07100264756a86ce837d79e6, disabling compactions & flushes 2023-07-14 17:13:35,993 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:35,993 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:35,993 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 
after waiting 0 ms 2023-07-14 17:13:35,994 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:35,994 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:35,994 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f55ac5dc07100264756a86ce837d79e6: 2023-07-14 17:13:36,001 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:36,003 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354816002"}]},"ts":"1689354816002"} 2023-07-14 17:13:36,003 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354816002"}]},"ts":"1689354816002"} 2023-07-14 17:13:36,003 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354816002"}]},"ts":"1689354816002"} 2023-07-14 17:13:36,003 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354816002"}]},"ts":"1689354816002"} 2023-07-14 17:13:36,004 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354816002"}]},"ts":"1689354816002"} 2023-07-14 17:13:36,059 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
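The CreateTableProcedure above lays out five regions bounded by '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz' and ''. Below is a minimal sketch of an Admin-API call that would produce such a layout, assuming the standard HBase 2.x client API; only the table name, the single family 'f', REGION_REPLICATION=1 and the 'aaaaa'..'zzzzz' key range come from the log, everything else is illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Five regions: (-inf,'aaaaa'), three evenly split regions over 'aaaaa'..'zzzzz',
      // and ['zzzzz',+inf); the binary middle boundaries in the log are consistent with
      // evenly splitting this range.
      admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
    }
  }
}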
2023-07-14 17:13:36,062 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:36,062 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354816062"}]},"ts":"1689354816062"} 2023-07-14 17:13:36,065 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-14 17:13:36,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,070 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,070 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,070 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,071 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, ASSIGN}] 2023-07-14 17:13:36,075 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, ASSIGN 2023-07-14 17:13:36,075 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, ASSIGN 2023-07-14 17:13:36,077 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, ASSIGN 2023-07-14 17:13:36,077 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, ASSIGN 2023-07-14 17:13:36,079 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:36,079 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:36,079 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, ASSIGN 2023-07-14 17:13:36,079 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:36,079 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:36,082 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:36,230 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 17:13:36,237 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,237 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816236"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816236"}]},"ts":"1689354816236"} 2023-07-14 17:13:36,238 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,238 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,238 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816238"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816238"}]},"ts":"1689354816238"} 2023-07-14 17:13:36,238 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816238"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816238"}]},"ts":"1689354816238"} 2023-07-14 17:13:36,238 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,239 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816238"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816238"}]},"ts":"1689354816238"} 2023-07-14 17:13:36,239 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,239 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816239"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816239"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816239"}]},"ts":"1689354816239"} 2023-07-14 17:13:36,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=19, state=RUNNABLE; OpenRegionProcedure 
332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 17:13:36,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:36,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=17, state=RUNNABLE; OpenRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:36,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:36,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=16, state=RUNNABLE; OpenRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:36,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:36,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 332da47a9120285a896fd3916c926dae, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 17:13:36,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 
2023-07-14 17:13:36,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 06199b5c9943b1c3c8422ea2364265f0, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 17:13:36,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:36,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,439 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,450 DEBUG [StoreOpener-332da47a9120285a896fd3916c926dae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/f 2023-07-14 17:13:36,451 DEBUG [StoreOpener-332da47a9120285a896fd3916c926dae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/f 2023-07-14 17:13:36,455 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 332da47a9120285a896fd3916c926dae columnFamilyName f 2023-07-14 17:13:36,456 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] regionserver.HStore(310): Store=332da47a9120285a896fd3916c926dae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:36,459 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:36,475 DEBUG [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/f 2023-07-14 17:13:36,475 DEBUG [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/f 2023-07-14 17:13:36,476 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 06199b5c9943b1c3c8422ea2364265f0 columnFamilyName f 2023-07-14 17:13:36,477 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] regionserver.HStore(310): Store=06199b5c9943b1c3c8422ea2364265f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 
17:13:36,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:36,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 332da47a9120285a896fd3916c926dae; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10503676000, jitterRate=-0.02176894247531891}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:36,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 332da47a9120285a896fd3916c926dae: 2023-07-14 17:13:36,495 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae., pid=21, masterSystemTime=1689354816396 2023-07-14 17:13:36,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:36,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:36,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:36,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 
2023-07-14 17:13:36,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 42014b8beb42dd7149481be1fd826fb9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 17:13:36,502 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:36,502 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816502"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354816502"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354816502"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354816502"}]},"ts":"1689354816502"} 2023-07-14 17:13:36,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:36,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 06199b5c9943b1c3c8422ea2364265f0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9728110720, jitterRate=-0.09399908781051636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:36,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 06199b5c9943b1c3c8422ea2364265f0: 2023-07-14 17:13:36,539 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,539 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=19 2023-07-14 17:13:36,539 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=19, state=SUCCESS; OpenRegionProcedure 
332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,46457,1689354809303 in 265 msec 2023-07-14 17:13:36,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0., pid=24, masterSystemTime=1689354816401 2023-07-14 17:13:36,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:36,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:36,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:36,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 96fb343f7f9d9c07b808aaf833d776d2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 17:13:36,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, ASSIGN in 468 msec 2023-07-14 17:13:36,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:36,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,545 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,545 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816545"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354816545"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354816545"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354816545"}]},"ts":"1689354816545"} 2023-07-14 17:13:36,546 DEBUG [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/f 2023-07-14 17:13:36,546 DEBUG [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/f 2023-07-14 17:13:36,547 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 42014b8beb42dd7149481be1fd826fb9 columnFamilyName f 2023-07-14 17:13:36,548 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,550 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] regionserver.HStore(310): Store=42014b8beb42dd7149481be1fd826fb9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:36,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,560 DEBUG [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/f 2023-07-14 17:13:36,560 DEBUG [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/f 2023-07-14 17:13:36,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:36,566 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 96fb343f7f9d9c07b808aaf833d776d2 columnFamilyName f 2023-07-14 17:13:36,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-14 17:13:36,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,44093,1689354809062 in 299 msec 2023-07-14 17:13:36,577 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] regionserver.HStore(310): Store=96fb343f7f9d9c07b808aaf833d776d2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:36,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:36,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, ASSIGN in 499 msec 2023-07-14 17:13:36,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 42014b8beb42dd7149481be1fd826fb9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9725547040, jitterRate=-0.09423784911632538}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:36,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 42014b8beb42dd7149481be1fd826fb9: 2023-07-14 17:13:36,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,581 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9., pid=23, masterSystemTime=1689354816396 2023-07-14 17:13:36,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:36,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:36,586 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,586 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816586"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354816586"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354816586"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354816586"}]},"ts":"1689354816586"} 2023-07-14 17:13:36,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:36,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:36,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 96fb343f7f9d9c07b808aaf833d776d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11144348160, jitterRate=0.03789830207824707}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:36,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 96fb343f7f9d9c07b808aaf833d776d2: 2023-07-14 17:13:36,593 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=17 2023-07-14 17:13:36,595 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=17, state=SUCCESS; OpenRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,46457,1689354809303 in 337 msec 2023-07-14 17:13:36,596 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2., pid=25, masterSystemTime=1689354816401 2023-07-14 17:13:36,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, ASSIGN in 524 msec 2023-07-14 17:13:36,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:36,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 
2023-07-14 17:13:36,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f55ac5dc07100264756a86ce837d79e6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 17:13:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,601 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,601 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816601"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354816601"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354816601"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354816601"}]},"ts":"1689354816601"} 2023-07-14 17:13:36,602 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,607 DEBUG [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/f 2023-07-14 17:13:36,607 DEBUG [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/f 2023-07-14 17:13:36,608 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f55ac5dc07100264756a86ce837d79e6 columnFamilyName f 2023-07-14 17:13:36,609 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] regionserver.HStore(310): Store=f55ac5dc07100264756a86ce837d79e6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:36,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,617 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=16 2023-07-14 17:13:36,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=16, state=SUCCESS; OpenRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,44093,1689354809062 in 347 msec 2023-07-14 17:13:36,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:36,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:36,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f55ac5dc07100264756a86ce837d79e6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9763833760, jitterRate=-0.09067212045192719}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:36,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f55ac5dc07100264756a86ce837d79e6: 2023-07-14 17:13:36,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6., pid=22, masterSystemTime=1689354816401 2023-07-14 17:13:36,630 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, ASSIGN in 548 msec 2023-07-14 17:13:36,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 
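The CompactionConfiguration entries above (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0) feed the exploring compaction policy's store-file selection. A simplified, self-contained sketch of the "files in ratio" test such a policy applies to a candidate set, where no file may exceed ratio times the combined size of the others; this is plain Java for illustration under that stated assumption, not the HBase class itself:

    // Simplified sketch of the ratio test used when selecting store files for a
    // minor compaction: every file in the candidate set must satisfy
    //   fileSize <= ratio * (totalSize - fileSize)
    // with ratio 1.2 (or 5.0 off-peak, per the log entry above).
    public final class CompactionRatioSketch {
        static boolean filesInRatio(long[] fileSizes, double ratio) {
            long total = 0;
            for (long size : fileSizes) {
                total += size;
            }
            for (long size : fileSizes) {
                if (size > (total - size) * ratio) {
                    return false;   // one file dominates the selection; reject it
                }
            }
            return true;
        }

        public static void main(String[] args) {
            // Three similarly sized files pass with ratio 1.2 ...
            System.out.println(filesInRatio(new long[] {100, 120, 110}, 1.2));   // true
            // ... but a selection dominated by one large file does not.
            System.out.println(filesInRatio(new long[] {100, 120, 1000}, 1.2));  // false
        }
    }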
2023-07-14 17:13:36,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:36,635 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,635 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816635"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354816635"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354816635"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354816635"}]},"ts":"1689354816635"} 2023-07-14 17:13:36,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-14 17:13:36,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; OpenRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,44093,1689354809062 in 394 msec 2023-07-14 17:13:36,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-14 17:13:36,668 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, ASSIGN in 575 msec 2023-07-14 17:13:36,671 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:36,671 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354816671"}]},"ts":"1689354816671"} 2023-07-14 17:13:36,673 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-14 17:13:36,676 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:36,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.0870 sec 2023-07-14 17:13:36,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 17:13:36,746 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-14 17:13:36,747 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-14 17:13:36,748 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:36,750 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42361] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 148.251.75.209:49540 deadline: 1689354876750, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44093 startCode=1689354809062. As of locationSeqNum=14. 2023-07-14 17:13:36,853 DEBUG [hconnection-0x37307bc-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:36,856 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38878, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:36,874 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-14 17:13:36,875 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:36,875 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-14 17:13:36,876 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:36,882 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:36,887 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51102, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:36,890 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:36,896 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49558, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:36,897 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:36,901 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:36,903 DEBUG [Listener at localhost.localdomain/41607] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:36,905 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60782, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:36,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:36,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for 
RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:36,919 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:36,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:36,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:36,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 96fb343f7f9d9c07b808aaf833d776d2 to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:36,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, REOPEN/MOVE 2023-07-14 17:13:36,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 42014b8beb42dd7149481be1fd826fb9 to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,942 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, REOPEN/MOVE 2023-07-14 17:13:36,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,943 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:36,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,944 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,944 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816944"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816944"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816944"}]},"ts":"1689354816944"} 2023-07-14 17:13:36,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, REOPEN/MOVE 2023-07-14 17:13:36,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 06199b5c9943b1c3c8422ea2364265f0 to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,947 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=26, state=RUNNABLE; CloseRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:36,945 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, REOPEN/MOVE 2023-07-14 17:13:36,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:36,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,953 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=29, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, REOPEN/MOVE 2023-07-14 17:13:36,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 332da47a9120285a896fd3916c926dae to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,953 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816951"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816951"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816951"}]},"ts":"1689354816951"} 2023-07-14 17:13:36,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,954 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, REOPEN/MOVE 2023-07-14 17:13:36,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:36,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,956 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,956 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816956"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816956"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816956"}]},"ts":"1689354816956"} 2023-07-14 17:13:36,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:36,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, REOPEN/MOVE 2023-07-14 17:13:36,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region f55ac5dc07100264756a86ce837d79e6 to RSGroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:36,959 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, REOPEN/MOVE 2023-07-14 17:13:36,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:36,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:36,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:36,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:36,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:36,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:36,962 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:36,962 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354816962"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816962"}]},"ts":"1689354816962"} 2023-07-14 17:13:36,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, REOPEN/MOVE 2023-07-14 17:13:36,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1337281511, current retry=0 2023-07-14 17:13:36,965 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, REOPEN/MOVE 2023-07-14 17:13:36,968 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:36,968 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354816968"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354816968"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354816968"}]},"ts":"1689354816968"} 2023-07-14 17:13:36,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; 
CloseRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:36,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; CloseRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:37,019 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 17:13:37,020 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-14 17:13:37,020 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:13:37,020 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-14 17:13:37,020 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 17:13:37,020 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-14 17:13:37,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f55ac5dc07100264756a86ce837d79e6, disabling compactions & flushes 2023-07-14 17:13:37,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:37,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:37,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. after waiting 0 ms 2023-07-14 17:13:37,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:37,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:37,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 
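The RSGroupAdminEndpoint entries above show the master handling "move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1337281511", which triggers the REOPEN/MOVE TransitRegionStateProcedures and the CloseRegionProcedure/OpenRegionProcedure pairs that follow. A minimal client-side sketch of issuing such a move, assuming the RSGroupAdminClient helper from this branch's hbase-rsgroup module and a target group that already exists; group and table names are copied from the log:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToRSGroupSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                String group = "Group_testTableMoveTruncateAndDrop_1337281511";  // group name from the log
                // Move the test table into the target rsgroup; the master then reopens
                // each of its regions on servers belonging to that group, as seen in
                // the procedure entries above and below.
                rsGroupAdmin.moveTables(
                    Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
                    group);
            }
        }
    }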
2023-07-14 17:13:37,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f55ac5dc07100264756a86ce837d79e6: 2023-07-14 17:13:37,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f55ac5dc07100264756a86ce837d79e6 move to jenkins-hbase20.apache.org,42361,1689354809221 record at close sequenceid=2 2023-07-14 17:13:37,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 332da47a9120285a896fd3916c926dae, disabling compactions & flushes 2023-07-14 17:13:37,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. after waiting 0 ms 2023-07-14 17:13:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:37,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 06199b5c9943b1c3c8422ea2364265f0, disabling compactions & flushes 2023-07-14 17:13:37,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:37,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:37,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. after waiting 0 ms 2023-07-14 17:13:37,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 
2023-07-14 17:13:37,119 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=CLOSED 2023-07-14 17:13:37,119 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817119"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354817119"}]},"ts":"1689354817119"} 2023-07-14 17:13:37,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:37,126 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=33 2023-07-14 17:13:37,126 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=33, state=SUCCESS; CloseRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,44093,1689354809062 in 148 msec 2023-07-14 17:13:37,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:37,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:37,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 06199b5c9943b1c3c8422ea2364265f0: 2023-07-14 17:13:37,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 06199b5c9943b1c3c8422ea2364265f0 move to jenkins-hbase20.apache.org,38517,1689354813230 record at close sequenceid=2 2023-07-14 17:13:37,128 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:37,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 
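The RegionStateStore entries above persist each state change (CLOSING, CLOSED, OPENING, OPEN) as a Put against the region's row in hbase:meta, using the info:regioninfo, info:sn, info:server and info:state qualifiers visible in the JSON dumps. A hedged sketch of reading those columns back with the plain client API; the table name and qualifiers are taken from the log, and this is only an illustration, not how the master itself reads meta:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRegionStateSketch {
        public static void main(String[] args) throws Exception {
            byte[] info = Bytes.toBytes("info");
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
                // Meta rows for a table's regions start with "<tableName>,", e.g.
                // "Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc...".
                Scan scan = new Scan()
                    .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
                try (ResultScanner scanner = meta.getScanner(scan)) {
                    for (Result r : scanner) {
                        String state = Bytes.toString(r.getValue(info, Bytes.toBytes("state")));
                        String server = Bytes.toString(r.getValue(info, Bytes.toBytes("server")));
                        System.out.println(Bytes.toStringBinary(r.getRow())
                            + " state=" + state + " server=" + server);
                    }
                }
            }
        }
    }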
2023-07-14 17:13:37,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 332da47a9120285a896fd3916c926dae: 2023-07-14 17:13:37,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 332da47a9120285a896fd3916c926dae move to jenkins-hbase20.apache.org,42361,1689354809221 record at close sequenceid=2 2023-07-14 17:13:37,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 96fb343f7f9d9c07b808aaf833d776d2, disabling compactions & flushes 2023-07-14 17:13:37,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:37,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:37,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. after waiting 0 ms 2023-07-14 17:13:37,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:37,132 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=CLOSED 2023-07-14 17:13:37,132 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817132"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354817132"}]},"ts":"1689354817132"} 2023-07-14 17:13:37,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 42014b8beb42dd7149481be1fd826fb9, disabling compactions & flushes 2023-07-14 17:13:37,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 
after waiting 0 ms 2023-07-14 17:13:37,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,135 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=CLOSED 2023-07-14 17:13:37,135 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817135"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354817135"}]},"ts":"1689354817135"} 2023-07-14 17:13:37,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-14 17:13:37,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,44093,1689354809062 in 175 msec 2023-07-14 17:13:37,141 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:37,142 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-14 17:13:37,142 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,46457,1689354809303 in 166 msec 2023-07-14 17:13:37,143 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:37,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:37,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:37,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 
2023-07-14 17:13:37,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 96fb343f7f9d9c07b808aaf833d776d2: 2023-07-14 17:13:37,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 96fb343f7f9d9c07b808aaf833d776d2 move to jenkins-hbase20.apache.org,42361,1689354809221 record at close sequenceid=2 2023-07-14 17:13:37,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 42014b8beb42dd7149481be1fd826fb9: 2023-07-14 17:13:37,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 42014b8beb42dd7149481be1fd826fb9 move to jenkins-hbase20.apache.org,38517,1689354813230 record at close sequenceid=2 2023-07-14 17:13:37,161 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,163 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=CLOSED 2023-07-14 17:13:37,163 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817163"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354817163"}]},"ts":"1689354817163"} 2023-07-14 17:13:37,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,165 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=CLOSED 2023-07-14 17:13:37,166 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817165"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354817165"}]},"ts":"1689354817165"} 2023-07-14 17:13:37,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-14 17:13:37,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,46457,1689354809303 in 210 msec 2023-07-14 17:13:37,172 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=26 2023-07-14 17:13:37,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=26, state=SUCCESS; CloseRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,44093,1689354809062 in 221 msec 2023-07-14 17:13:37,173 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, 
retain=false 2023-07-14 17:13:37,174 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:37,278 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-14 17:13:37,279 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,279 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,279 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:37,279 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354817279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354817279"}]},"ts":"1689354817279"} 2023-07-14 17:13:37,279 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354817279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354817279"}]},"ts":"1689354817279"} 2023-07-14 17:13:37,279 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354817279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354817279"}]},"ts":"1689354817279"} 2023-07-14 17:13:37,279 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,279 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354817279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354817279"}]},"ts":"1689354817279"} 2023-07-14 17:13:37,279 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:37,280 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354817279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354817279"}]},"ts":"1689354817279"} 2023-07-14 17:13:37,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=27, state=RUNNABLE; OpenRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:37,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; OpenRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:37,285 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=30, state=RUNNABLE; OpenRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:37,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=26, state=RUNNABLE; OpenRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:37,288 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=29, state=RUNNABLE; OpenRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:37,435 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:37,436 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:13:37,437 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51106, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:13:37,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 
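After "Reassigned 5 regions", the OpenRegionProcedures above target servers of the destination rsgroup (ports 42361 and 38517) rather than the original hosts (44093, 46457). A short sketch of how a client could confirm where a given row's region now lives, forcing a location-cache refresh; the row key and table name are copied from the log, and the reload flag is what avoids the stale-location RegionMovedException retry seen earlier in this log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionLocationSketch {
        public static void main(String[] args) throws Exception {
            TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 RegionLocator locator = conn.getRegionLocator(table)) {
                // "aaaaa" is the start key of region 42014b8beb42dd7149481be1fd826fb9 above;
                // reload=true bypasses the client-side cache and asks meta again.
                HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("aaaaa"), true);
                System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
            }
        }
    }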
2023-07-14 17:13:37,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 42014b8beb42dd7149481be1fd826fb9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 17:13:37,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:37,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,451 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 
2023-07-14 17:13:37,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 332da47a9120285a896fd3916c926dae, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 17:13:37,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:37,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,454 DEBUG [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/f 2023-07-14 17:13:37,454 DEBUG [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/f 2023-07-14 17:13:37,455 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 42014b8beb42dd7149481be1fd826fb9 columnFamilyName f 2023-07-14 17:13:37,455 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,456 INFO [StoreOpener-42014b8beb42dd7149481be1fd826fb9-1] regionserver.HStore(310): Store=42014b8beb42dd7149481be1fd826fb9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:37,458 DEBUG [StoreOpener-332da47a9120285a896fd3916c926dae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/f 2023-07-14 17:13:37,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,458 DEBUG [StoreOpener-332da47a9120285a896fd3916c926dae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/f 2023-07-14 17:13:37,459 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 332da47a9120285a896fd3916c926dae columnFamilyName f 2023-07-14 17:13:37,460 INFO [StoreOpener-332da47a9120285a896fd3916c926dae-1] regionserver.HStore(310): Store=332da47a9120285a896fd3916c926dae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:37,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:37,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 42014b8beb42dd7149481be1fd826fb9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9962237600, jitterRate=-0.07219432294368744}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:37,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 
42014b8beb42dd7149481be1fd826fb9: 2023-07-14 17:13:37,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:37,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9., pid=36, masterSystemTime=1689354817435 2023-07-14 17:13:37,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 332da47a9120285a896fd3916c926dae; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11983163840, jitterRate=0.1160190999507904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:37,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 332da47a9120285a896fd3916c926dae: 2023-07-14 17:13:37,478 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae., pid=38, masterSystemTime=1689354817439 2023-07-14 17:13:37,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:37,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 
2023-07-14 17:13:37,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 06199b5c9943b1c3c8422ea2364265f0, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 17:13:37,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:37,480 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:37,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,481 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817480"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354817480"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354817480"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354817480"}]},"ts":"1689354817480"} 2023-07-14 17:13:37,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:37,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:37,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 
2023-07-14 17:13:37,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 96fb343f7f9d9c07b808aaf833d776d2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 17:13:37,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:37,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,482 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,483 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,484 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817482"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354817482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354817482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354817482"}]},"ts":"1689354817482"} 2023-07-14 17:13:37,485 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,485 DEBUG [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/f 2023-07-14 17:13:37,488 DEBUG [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/f 2023-07-14 17:13:37,489 DEBUG [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/f 2023-07-14 17:13:37,489 DEBUG [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/f 2023-07-14 17:13:37,489 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 06199b5c9943b1c3c8422ea2364265f0 columnFamilyName f 2023-07-14 17:13:37,490 INFO [StoreOpener-06199b5c9943b1c3c8422ea2364265f0-1] regionserver.HStore(310): Store=06199b5c9943b1c3c8422ea2364265f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:37,490 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 96fb343f7f9d9c07b808aaf833d776d2 columnFamilyName f 2023-07-14 17:13:37,492 INFO [StoreOpener-96fb343f7f9d9c07b808aaf833d776d2-1] regionserver.HStore(310): Store=96fb343f7f9d9c07b808aaf833d776d2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:37,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=27 2023-07-14 17:13:37,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; OpenRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,38517,1689354813230 in 203 msec 2023-07-14 17:13:37,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=30 2023-07-14 17:13:37,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=30, state=SUCCESS; OpenRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,42361,1689354809221 in 204 msec 2023-07-14 17:13:37,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:37,508 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, REOPEN/MOVE in 556 msec 2023-07-14 17:13:37,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:37,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, REOPEN/MOVE in 546 msec 2023-07-14 17:13:37,511 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 96fb343f7f9d9c07b808aaf833d776d2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10279014240, jitterRate=-0.04269219934940338}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:37,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 96fb343f7f9d9c07b808aaf833d776d2: 2023-07-14 17:13:37,511 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 06199b5c9943b1c3c8422ea2364265f0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9526916640, jitterRate=-0.11273674666881561}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:37,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 06199b5c9943b1c3c8422ea2364265f0: 2023-07-14 17:13:37,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2., pid=39, masterSystemTime=1689354817439 2023-07-14 17:13:37,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0., pid=40, masterSystemTime=1689354817435 2023-07-14 17:13:37,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:37,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:37,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:37,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f55ac5dc07100264756a86ce837d79e6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 17:13:37,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:37,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,517 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817516"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354817516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354817516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354817516"}]},"ts":"1689354817516"} 2023-07-14 17:13:37,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:37,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 
2023-07-14 17:13:37,518 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:37,519 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354817518"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354817518"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354817518"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354817518"}]},"ts":"1689354817518"} 2023-07-14 17:13:37,519 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,522 DEBUG [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/f 2023-07-14 17:13:37,522 DEBUG [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/f 2023-07-14 17:13:37,523 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f55ac5dc07100264756a86ce837d79e6 columnFamilyName f 2023-07-14 17:13:37,525 INFO [StoreOpener-f55ac5dc07100264756a86ce837d79e6-1] regionserver.HStore(310): Store=f55ac5dc07100264756a86ce837d79e6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:37,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=26 2023-07-14 17:13:37,525 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=26, state=SUCCESS; OpenRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,42361,1689354809221 in 233 msec 2023-07-14 17:13:37,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 
17:13:37,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=29 2023-07-14 17:13:37,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=29, state=SUCCESS; OpenRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,38517,1689354813230 in 235 msec 2023-07-14 17:13:37,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, REOPEN/MOVE in 586 msec 2023-07-14 17:13:37,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, REOPEN/MOVE in 577 msec 2023-07-14 17:13:37,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:37,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f55ac5dc07100264756a86ce837d79e6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11963896160, jitterRate=0.11422465741634369}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:37,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f55ac5dc07100264756a86ce837d79e6: 2023-07-14 17:13:37,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6., pid=37, masterSystemTime=1689354817439 2023-07-14 17:13:37,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:37,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 
2023-07-14 17:13:37,541 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:37,541 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354817541"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354817541"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354817541"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354817541"}]},"ts":"1689354817541"} 2023-07-14 17:13:37,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-14 17:13:37,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; OpenRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,42361,1689354809221 in 261 msec 2023-07-14 17:13:37,552 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, REOPEN/MOVE in 589 msec 2023-07-14 17:13:37,880 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 17:13:37,945 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-14 17:13:37,946 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-14 17:13:37,947 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-14 17:13:37,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-14 17:13:37,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1337281511. 
2023-07-14 17:13:37,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:37,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:37,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:37,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:37,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:37,972 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:37,978 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:37,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:37,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:37,994 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354817994"}]},"ts":"1689354817994"} 2023-07-14 17:13:37,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 17:13:37,996 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-14 17:13:37,997 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-14 17:13:37,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, UNASSIGN}] 2023-07-14 17:13:38,002 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, UNASSIGN 2023-07-14 17:13:38,002 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, UNASSIGN 2023-07-14 17:13:38,002 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, UNASSIGN 2023-07-14 17:13:38,002 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, UNASSIGN 2023-07-14 17:13:38,003 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, UNASSIGN 2023-07-14 17:13:38,003 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,003 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,003 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:38,005 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,005 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818003"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818003"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818003"}]},"ts":"1689354818003"} 2023-07-14 17:13:38,005 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818003"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818003"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818003"}]},"ts":"1689354818003"} 2023-07-14 17:13:38,005 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818003"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818003"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818003"}]},"ts":"1689354818003"} 2023-07-14 17:13:38,005 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:38,005 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818005"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818005"}]},"ts":"1689354818005"} 2023-07-14 17:13:38,005 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818005"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818005"}]},"ts":"1689354818005"} 2023-07-14 17:13:38,007 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:38,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=45, state=RUNNABLE; CloseRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=46, state=RUNNABLE; CloseRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,012 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=44, state=RUNNABLE; CloseRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:38,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 17:13:38,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:38,162 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 96fb343f7f9d9c07b808aaf833d776d2, disabling compactions & flushes 2023-07-14 17:13:38,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 
2023-07-14 17:13:38,162 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:38,162 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. after waiting 0 ms 2023-07-14 17:13:38,162 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 2023-07-14 17:13:38,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:38,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 42014b8beb42dd7149481be1fd826fb9, disabling compactions & flushes 2023-07-14 17:13:38,168 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:38,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:38,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. after waiting 0 ms 2023-07-14 17:13:38,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 2023-07-14 17:13:38,174 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:38,176 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2. 
2023-07-14 17:13:38,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 96fb343f7f9d9c07b808aaf833d776d2: 2023-07-14 17:13:38,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:38,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:38,182 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=96fb343f7f9d9c07b808aaf833d776d2, regionState=CLOSED 2023-07-14 17:13:38,182 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818182"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818182"}]},"ts":"1689354818182"} 2023-07-14 17:13:38,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 332da47a9120285a896fd3916c926dae, disabling compactions & flushes 2023-07-14 17:13:38,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:38,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:38,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. after waiting 0 ms 2023-07-14 17:13:38,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 2023-07-14 17:13:38,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:38,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9. 
2023-07-14 17:13:38,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 42014b8beb42dd7149481be1fd826fb9: 2023-07-14 17:13:38,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:38,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:38,191 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-14 17:13:38,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 06199b5c9943b1c3c8422ea2364265f0, disabling compactions & flushes 2023-07-14 17:13:38,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 96fb343f7f9d9c07b808aaf833d776d2, server=jenkins-hbase20.apache.org,42361,1689354809221 in 179 msec 2023-07-14 17:13:38,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:38,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:38,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. after waiting 0 ms 2023-07-14 17:13:38,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:38,193 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=42014b8beb42dd7149481be1fd826fb9, regionState=CLOSED 2023-07-14 17:13:38,193 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818193"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818193"}]},"ts":"1689354818193"} 2023-07-14 17:13:38,194 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=96fb343f7f9d9c07b808aaf833d776d2, UNASSIGN in 192 msec 2023-07-14 17:13:38,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:38,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae. 
2023-07-14 17:13:38,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 332da47a9120285a896fd3916c926dae: 2023-07-14 17:13:38,208 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 332da47a9120285a896fd3916c926dae 2023-07-14 17:13:38,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:38,209 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=332da47a9120285a896fd3916c926dae, regionState=CLOSED 2023-07-14 17:13:38,209 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818209"}]},"ts":"1689354818209"} 2023-07-14 17:13:38,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-14 17:13:38,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 42014b8beb42dd7149481be1fd826fb9, server=jenkins-hbase20.apache.org,38517,1689354813230 in 194 msec 2023-07-14 17:13:38,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f55ac5dc07100264756a86ce837d79e6, disabling compactions & flushes 2023-07-14 17:13:38,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:38,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:38,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. after waiting 0 ms 2023-07-14 17:13:38,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 
2023-07-14 17:13:38,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=42014b8beb42dd7149481be1fd826fb9, UNASSIGN in 210 msec 2023-07-14 17:13:38,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:38,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:38,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0. 2023-07-14 17:13:38,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 06199b5c9943b1c3c8422ea2364265f0: 2023-07-14 17:13:38,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6. 2023-07-14 17:13:38,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f55ac5dc07100264756a86ce837d79e6: 2023-07-14 17:13:38,235 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=45 2023-07-14 17:13:38,235 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=06199b5c9943b1c3c8422ea2364265f0, regionState=CLOSED 2023-07-14 17:13:38,235 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; CloseRegionProcedure 332da47a9120285a896fd3916c926dae, server=jenkins-hbase20.apache.org,42361,1689354809221 in 211 msec 2023-07-14 17:13:38,235 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818235"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818235"}]},"ts":"1689354818235"} 2023-07-14 17:13:38,236 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=f55ac5dc07100264756a86ce837d79e6, regionState=CLOSED 2023-07-14 17:13:38,236 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818236"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818236"}]},"ts":"1689354818236"} 2023-07-14 17:13:38,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:38,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:38,237 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): 
Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=332da47a9120285a896fd3916c926dae, UNASSIGN in 236 msec 2023-07-14 17:13:38,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=44 2023-07-14 17:13:38,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=44, state=SUCCESS; CloseRegionProcedure 06199b5c9943b1c3c8422ea2364265f0, server=jenkins-hbase20.apache.org,38517,1689354813230 in 225 msec 2023-07-14 17:13:38,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=46 2023-07-14 17:13:38,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=46, state=SUCCESS; CloseRegionProcedure f55ac5dc07100264756a86ce837d79e6, server=jenkins-hbase20.apache.org,42361,1689354809221 in 227 msec 2023-07-14 17:13:38,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=06199b5c9943b1c3c8422ea2364265f0, UNASSIGN in 243 msec 2023-07-14 17:13:38,250 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-14 17:13:38,250 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f55ac5dc07100264756a86ce837d79e6, UNASSIGN in 248 msec 2023-07-14 17:13:38,252 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354818251"}]},"ts":"1689354818251"} 2023-07-14 17:13:38,253 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-14 17:13:38,255 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-14 17:13:38,261 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 275 msec 2023-07-14 17:13:38,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 17:13:38,299 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-14 17:13:38,301 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:38,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$6(2260): Client=jenkins//148.251.75.209 truncate Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:38,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-14 17:13:38,320 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-14 17:13:38,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to 
see if procedure is done pid=52 2023-07-14 17:13:38,339 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:38,339 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:38,339 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:38,339 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:38,339 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:38,344 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits] 2023-07-14 17:13:38,344 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits] 2023-07-14 17:13:38,347 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits] 2023-07-14 17:13:38,348 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits] 2023-07-14 17:13:38,348 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits] 2023-07-14 17:13:38,355 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6/recovered.edits/7.seqid 2023-07-14 17:13:38,355 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae/recovered.edits/7.seqid 2023-07-14 17:13:38,358 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9/recovered.edits/7.seqid 2023-07-14 17:13:38,358 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f55ac5dc07100264756a86ce837d79e6 2023-07-14 17:13:38,359 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/332da47a9120285a896fd3916c926dae 2023-07-14 17:13:38,359 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/42014b8beb42dd7149481be1fd826fb9 2023-07-14 17:13:38,360 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2/recovered.edits/7.seqid 2023-07-14 17:13:38,361 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0/recovered.edits/7.seqid 2023-07-14 17:13:38,361 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/96fb343f7f9d9c07b808aaf833d776d2 2023-07-14 17:13:38,361 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/06199b5c9943b1c3c8422ea2364265f0 2023-07-14 17:13:38,361 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 17:13:38,388 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-14 17:13:38,392 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-14 17:13:38,393 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-14 17:13:38,393 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354818393"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,393 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354818393"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,393 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354818393"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,393 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354815586.332da47a9120285a896fd3916c926dae.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354818393"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,394 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354818393"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,397 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 17:13:38,397 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 96fb343f7f9d9c07b808aaf833d776d2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354815586.96fb343f7f9d9c07b808aaf833d776d2.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 42014b8beb42dd7149481be1fd826fb9, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689354815586.42014b8beb42dd7149481be1fd826fb9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 06199b5c9943b1c3c8422ea2364265f0, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354815586.06199b5c9943b1c3c8422ea2364265f0.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 332da47a9120285a896fd3916c926dae, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354815586.332da47a9120285a896fd3916c926dae.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f55ac5dc07100264756a86ce837d79e6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354815586.f55ac5dc07100264756a86ce837d79e6.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 17:13:38,397 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-14 17:13:38,397 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354818397"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:38,400 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-14 17:13:38,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 17:13:38,513 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:38,513 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:38,513 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,513 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,513 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 empty. 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 empty. 
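The entries above record the completion of the DisableTableProcedure (pid=41) followed by the client starting a truncate and the master storing a TruncateTableProcedure (pid=52) with preserveSplits=true. For reference, a minimal client-side sketch of the call sequence that produces this pattern; the connection setup and class name are illustrative assumptions, not taken from the test source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplits {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // A table must be disabled before it can be truncated
      // (DisableTableProcedure, pid=41 in the log above).
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      // preserveSplits=true keeps the existing split points, so the
      // TruncateTableProcedure (pid=52) recreates five regions with the
      // same boundaries it just archived.
      admin.truncateTable(table, true);
    }
  }
}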
2023-07-14 17:13:38,515 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 empty. 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca empty. 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 empty. 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:38,515 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,516 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:38,516 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,516 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:38,516 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 17:13:38,542 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:38,544 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 892020cab5424ab6b4c288732ecbfdca, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:38,545 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating 
{ENCODED => 7717df2a311e88ba87e9fd764d6ac069, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:38,547 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8beec090c25553a91db6d6daf8984684, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 892020cab5424ab6b4c288732ecbfdca, disabling compactions & flushes 2023-07-14 17:13:38,586 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. after waiting 0 ms 2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:38,586 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 
2023-07-14 17:13:38,586 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 892020cab5424ab6b4c288732ecbfdca: 2023-07-14 17:13:38,587 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 87fa5194bfbd03d36825de5de388b609, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:38,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 8beec090c25553a91db6d6daf8984684, disabling compactions & flushes 2023-07-14 17:13:38,589 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:38,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,589 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7717df2a311e88ba87e9fd764d6ac069, disabling compactions & flushes 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. after waiting 0 ms 2023-07-14 17:13:38,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 
2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:38,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. after waiting 0 ms 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 8beec090c25553a91db6d6daf8984684: 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:38,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:38,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7717df2a311e88ba87e9fd764d6ac069: 2023-07-14 17:13:38,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ece056dcf807a14999e5af64c21c3ff5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:38,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 17:13:38,633 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,634 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ece056dcf807a14999e5af64c21c3ff5, disabling compactions & flushes 2023-07-14 17:13:38,634 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:38,634 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 
2023-07-14 17:13:38,634 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. after waiting 0 ms 2023-07-14 17:13:38,634 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:38,634 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:38,634 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ece056dcf807a14999e5af64c21c3ff5: 2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 87fa5194bfbd03d36825de5de388b609, disabling compactions & flushes 2023-07-14 17:13:38,638 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. after waiting 0 ms 2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:38,638 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 
2023-07-14 17:13:38,638 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 87fa5194bfbd03d36825de5de388b609: 2023-07-14 17:13:38,644 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818644"}]},"ts":"1689354818644"} 2023-07-14 17:13:38,644 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818644"}]},"ts":"1689354818644"} 2023-07-14 17:13:38,644 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818644"}]},"ts":"1689354818644"} 2023-07-14 17:13:38,644 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818644"}]},"ts":"1689354818644"} 2023-07-14 17:13:38,645 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354818644"}]},"ts":"1689354818644"} 2023-07-14 17:13:38,650 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
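The RegionOpenAndInit entries above show the descriptor the truncate procedure reuses when recreating the five regions: REGION_REPLICATION => '1', a single family 'f' with VERSIONS => '1', BLOOMFILTER => 'NONE', BLOCKSIZE => '65536', and the original split points. A hedged sketch of how an equivalent table could be declared with the standard HBase 2.x builder API; this illustrates the logged attributes and is not the test's own setup code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  static void createLike(Admin admin) throws Exception {
    ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("f"))
        .setMaxVersions(1)                  // VERSIONS => '1'
        .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
        .setBlocksize(65536)                // BLOCKSIZE => '65536'
        .build();
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setRegionReplication(1)            // REGION_REPLICATION => '1'
        .setColumnFamily(family)
        .build();
    // The same split points the procedure preserved; the two middle keys
    // are binary in the log (i\xBF\x14i\xBE and r\x1C\xC7r\x1B).
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
        new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
        Bytes.toBytes("zzzzz")
    };
    admin.createTable(desc, splits);
  }
}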
2023-07-14 17:13:38,651 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354818651"}]},"ts":"1689354818651"} 2023-07-14 17:13:38,653 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-14 17:13:38,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:38,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:38,660 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:38,660 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:38,664 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, ASSIGN}] 2023-07-14 17:13:38,667 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, ASSIGN 2023-07-14 17:13:38,671 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:38,673 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, ASSIGN 2023-07-14 17:13:38,674 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, ASSIGN 2023-07-14 17:13:38,674 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=87fa5194bfbd03d36825de5de388b609, ASSIGN 2023-07-14 17:13:38,674 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, ASSIGN 2023-07-14 17:13:38,676 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:38,677 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:38,677 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42361,1689354809221; forceNewPlan=false, retain=false 2023-07-14 17:13:38,677 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:38,822 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 17:13:38,828 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=87fa5194bfbd03d36825de5de388b609, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,828 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=892020cab5424ab6b4c288732ecbfdca, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,828 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=ece056dcf807a14999e5af64c21c3ff5, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:38,828 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7717df2a311e88ba87e9fd764d6ac069, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:38,829 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818828"}]},"ts":"1689354818828"} 2023-07-14 17:13:38,829 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354818828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818828"}]},"ts":"1689354818828"} 2023-07-14 17:13:38,829 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818828"}]},"ts":"1689354818828"} 2023-07-14 17:13:38,829 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818828"}]},"ts":"1689354818828"} 2023-07-14 17:13:38,829 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=8beec090c25553a91db6d6daf8984684, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:38,830 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354818829"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354818829"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354818829"}]},"ts":"1689354818829"} 2023-07-14 17:13:38,832 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=57, state=RUNNABLE; OpenRegionProcedure 
ece056dcf807a14999e5af64c21c3ff5, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:38,833 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure 892020cab5424ab6b4c288732ecbfdca, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=54, state=RUNNABLE; OpenRegionProcedure 7717df2a311e88ba87e9fd764d6ac069, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,837 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure 87fa5194bfbd03d36825de5de388b609, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:38,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=55, state=RUNNABLE; OpenRegionProcedure 8beec090c25553a91db6d6daf8984684, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:38,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 17:13:38,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:38,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8beec090c25553a91db6d6daf8984684, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 17:13:38,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 
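The recurring MasterRpcServices entries ("Checking to see if procedure is done pid=52") are the master-side view of the client polling for the truncate procedure to complete, the same mechanism behind the earlier "Operation: DISABLE ... procId: 41 completed" line. A short sketch of the non-blocking form of the call, assuming an Admin handle as in the earlier sketch.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class AsyncTruncateSketch {
  static void truncateAndWait(Admin admin, TableName table) throws Exception {
    // Submits the TruncateTableProcedure and returns immediately; completion
    // is observed by polling the master, which is what the repeated
    // "Checking to see if procedure is done" log lines record.
    Future<Void> truncation = admin.truncateTableAsync(table, true);
    truncation.get(5, TimeUnit.MINUTES); // block until the procedure finishes
  }
}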
2023-07-14 17:13:38,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 87fa5194bfbd03d36825de5de388b609, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 17:13:38,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:38,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,992 INFO [StoreOpener-8beec090c25553a91db6d6daf8984684-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,993 DEBUG [StoreOpener-8beec090c25553a91db6d6daf8984684-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/f 2023-07-14 17:13:38,993 DEBUG [StoreOpener-8beec090c25553a91db6d6daf8984684-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/f 2023-07-14 17:13:38,994 INFO [StoreOpener-8beec090c25553a91db6d6daf8984684-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8beec090c25553a91db6d6daf8984684 columnFamilyName f 2023-07-14 17:13:38,995 INFO [StoreOpener-8beec090c25553a91db6d6daf8984684-1] regionserver.HStore(310): Store=8beec090c25553a91db6d6daf8984684/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:38,995 INFO [StoreOpener-87fa5194bfbd03d36825de5de388b609-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:38,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:38,997 DEBUG [StoreOpener-87fa5194bfbd03d36825de5de388b609-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/f 2023-07-14 17:13:38,997 DEBUG [StoreOpener-87fa5194bfbd03d36825de5de388b609-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/f 2023-07-14 17:13:38,997 INFO [StoreOpener-87fa5194bfbd03d36825de5de388b609-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 87fa5194bfbd03d36825de5de388b609 columnFamilyName f 2023-07-14 17:13:38,998 INFO [StoreOpener-87fa5194bfbd03d36825de5de388b609-1] regionserver.HStore(310): Store=87fa5194bfbd03d36825de5de388b609/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:39,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:39,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:39,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8beec090c25553a91db6d6daf8984684; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10884648320, jitterRate=0.013711869716644287}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:39,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8beec090c25553a91db6d6daf8984684: 2023-07-14 17:13:39,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684., pid=62, masterSystemTime=1689354818985 2023-07-14 17:13:39,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 
2023-07-14 17:13:39,016 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=8beec090c25553a91db6d6daf8984684, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:39,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ece056dcf807a14999e5af64c21c3ff5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 17:13:39,016 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819016"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354819016"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354819016"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354819016"}]},"ts":"1689354819016"} 2023-07-14 17:13:39,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:39,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=55 2023-07-14 17:13:39,022 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=55, state=SUCCESS; OpenRegionProcedure 8beec090c25553a91db6d6daf8984684, server=jenkins-hbase20.apache.org,38517,1689354813230 in 180 msec 2023-07-14 17:13:39,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, ASSIGN in 359 msec 2023-07-14 17:13:39,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:39,028 INFO [StoreOpener-ece056dcf807a14999e5af64c21c3ff5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 87fa5194bfbd03d36825de5de388b609; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11991535840, jitterRate=0.11679880321025848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:39,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 87fa5194bfbd03d36825de5de388b609: 2023-07-14 17:13:39,029 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609., pid=61, masterSystemTime=1689354818986 2023-07-14 17:13:39,029 DEBUG [StoreOpener-ece056dcf807a14999e5af64c21c3ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/f 2023-07-14 17:13:39,030 DEBUG [StoreOpener-ece056dcf807a14999e5af64c21c3ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/f 2023-07-14 17:13:39,030 INFO [StoreOpener-ece056dcf807a14999e5af64c21c3ff5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ece056dcf807a14999e5af64c21c3ff5 columnFamilyName f 2023-07-14 17:13:39,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:39,031 INFO [StoreOpener-ece056dcf807a14999e5af64c21c3ff5-1] regionserver.HStore(310): Store=ece056dcf807a14999e5af64c21c3ff5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:39,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:39,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 
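Each store open logs a CompactionConfiguration summary (minCompactSize 128 MB, files to compact 3-10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 with 0.5 jitter). A hedged sketch of the standard hbase-site.xml keys those values typically map to; the key-to-value mapping is stated from general HBase defaults, not derived from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  static Configuration defaultsSeenInLog() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);       // minCompactSize:128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                              // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                             // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                       // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);               // off-peak ratio 5.000000
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);                  // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);                // major jitter 0.500000
    return conf;
  }
}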
2023-07-14 17:13:39,031 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=87fa5194bfbd03d36825de5de388b609, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7717df2a311e88ba87e9fd764d6ac069, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 17:13:39,032 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819031"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354819031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354819031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354819031"}]},"ts":"1689354819031"} 2023-07-14 17:13:39,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:39,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,034 INFO [StoreOpener-7717df2a311e88ba87e9fd764d6ac069-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,037 DEBUG [StoreOpener-7717df2a311e88ba87e9fd764d6ac069-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/f 2023-07-14 17:13:39,038 DEBUG [StoreOpener-7717df2a311e88ba87e9fd764d6ac069-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/f 2023-07-14 17:13:39,039 INFO [StoreOpener-7717df2a311e88ba87e9fd764d6ac069-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7717df2a311e88ba87e9fd764d6ac069 columnFamilyName f 2023-07-14 17:13:39,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-14 17:13:39,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure 87fa5194bfbd03d36825de5de388b609, server=jenkins-hbase20.apache.org,42361,1689354809221 in 197 msec 2023-07-14 17:13:39,040 INFO [StoreOpener-7717df2a311e88ba87e9fd764d6ac069-1] regionserver.HStore(310): Store=7717df2a311e88ba87e9fd764d6ac069/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:39,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, ASSIGN in 377 msec 2023-07-14 17:13:39,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:39,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ece056dcf807a14999e5af64c21c3ff5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11419495680, jitterRate=0.06352341175079346}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:39,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 
ece056dcf807a14999e5af64c21c3ff5: 2023-07-14 17:13:39,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5., pid=58, masterSystemTime=1689354818985 2023-07-14 17:13:39,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:39,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:39,049 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=ece056dcf807a14999e5af64c21c3ff5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:39,049 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819049"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354819049"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354819049"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354819049"}]},"ts":"1689354819049"} 2023-07-14 17:13:39,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:39,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7717df2a311e88ba87e9fd764d6ac069; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10801789280, jitterRate=0.005995020270347595}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:39,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7717df2a311e88ba87e9fd764d6ac069: 2023-07-14 17:13:39,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069., pid=60, masterSystemTime=1689354818986 2023-07-14 17:13:39,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:39,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:39,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 
2023-07-14 17:13:39,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 892020cab5424ab6b4c288732ecbfdca, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 17:13:39,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:39,054 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7717df2a311e88ba87e9fd764d6ac069, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,054 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819054"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354819054"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354819054"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354819054"}]},"ts":"1689354819054"} 2023-07-14 17:13:39,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=57 2023-07-14 17:13:39,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=57, state=SUCCESS; OpenRegionProcedure ece056dcf807a14999e5af64c21c3ff5, server=jenkins-hbase20.apache.org,38517,1689354813230 in 220 msec 2023-07-14 17:13:39,056 INFO [StoreOpener-892020cab5424ab6b4c288732ecbfdca-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, ASSIGN in 392 msec 2023-07-14 17:13:39,059 DEBUG [StoreOpener-892020cab5424ab6b4c288732ecbfdca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/f 2023-07-14 17:13:39,059 DEBUG [StoreOpener-892020cab5424ab6b4c288732ecbfdca-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/f 2023-07-14 17:13:39,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=54 2023-07-14 17:13:39,059 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=54, state=SUCCESS; OpenRegionProcedure 7717df2a311e88ba87e9fd764d6ac069, server=jenkins-hbase20.apache.org,42361,1689354809221 in 221 msec 2023-07-14 17:13:39,059 INFO [StoreOpener-892020cab5424ab6b4c288732ecbfdca-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 892020cab5424ab6b4c288732ecbfdca columnFamilyName f 2023-07-14 17:13:39,060 INFO [StoreOpener-892020cab5424ab6b4c288732ecbfdca-1] regionserver.HStore(310): Store=892020cab5424ab6b4c288732ecbfdca/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:39,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, ASSIGN in 396 msec 2023-07-14 17:13:39,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:39,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 892020cab5424ab6b4c288732ecbfdca; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10982607520, jitterRate=0.022835031151771545}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:39,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal 
for 892020cab5424ab6b4c288732ecbfdca: 2023-07-14 17:13:39,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca., pid=59, masterSystemTime=1689354818986 2023-07-14 17:13:39,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,071 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=892020cab5424ab6b4c288732ecbfdca, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,071 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819071"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354819071"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354819071"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354819071"}]},"ts":"1689354819071"} 2023-07-14 17:13:39,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-14 17:13:39,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure 892020cab5424ab6b4c288732ecbfdca, server=jenkins-hbase20.apache.org,42361,1689354809221 in 240 msec 2023-07-14 17:13:39,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=52 2023-07-14 17:13:39,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, ASSIGN in 415 msec 2023-07-14 17:13:39,078 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354819078"}]},"ts":"1689354819078"} 2023-07-14 17:13:39,080 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-14 17:13:39,081 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-14 17:13:39,083 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 772 msec 2023-07-14 17:13:39,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 17:13:39,431 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-14 17:13:39,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:39,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:39,435 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 17:13:39,443 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354819442"}]},"ts":"1689354819442"} 2023-07-14 17:13:39,445 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-14 17:13:39,447 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-14 17:13:39,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, UNASSIGN}] 2023-07-14 17:13:39,451 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, UNASSIGN 2023-07-14 17:13:39,451 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, UNASSIGN 2023-07-14 17:13:39,452 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, UNASSIGN 2023-07-14 17:13:39,452 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, UNASSIGN 2023-07-14 17:13:39,452 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, UNASSIGN 2023-07-14 17:13:39,453 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=892020cab5424ab6b4c288732ecbfdca, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,453 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=87fa5194bfbd03d36825de5de388b609, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,453 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=ece056dcf807a14999e5af64c21c3ff5, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:39,453 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819453"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354819453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354819453"}]},"ts":"1689354819453"} 2023-07-14 17:13:39,453 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=8beec090c25553a91db6d6daf8984684, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:39,453 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7717df2a311e88ba87e9fd764d6ac069, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:39,454 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819453"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354819453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354819453"}]},"ts":"1689354819453"} 2023-07-14 17:13:39,454 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819453"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354819453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354819453"}]},"ts":"1689354819453"} 
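The TruncateTableProcedure that finished above (pid=52, preserveSplits=true) is driven from the client by a single Admin call. A minimal sketch of that call follows, assuming a standard Connection/Admin setup that is not part of this log; only disableTable, isTableEnabled, and truncateTable are taken from the public 2.x Admin API.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplitsSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);   // truncate requires the table to be disabled first
      }
      // preserveSplits=true keeps the region boundaries, which is why the same five
      // split keys are re-assigned after TruncateTableProcedure finishes in the log.
      admin.truncateTable(tn, true);
      // The call blocks until the master-side procedure (pid=52 above) completes.
    }
  }
}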
2023-07-14 17:13:39,453 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819453"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354819453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354819453"}]},"ts":"1689354819453"} 2023-07-14 17:13:39,453 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819453"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354819453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354819453"}]},"ts":"1689354819453"} 2023-07-14 17:13:39,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure 892020cab5424ab6b4c288732ecbfdca, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:39,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=66, state=RUNNABLE; CloseRegionProcedure 8beec090c25553a91db6d6daf8984684, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:39,459 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=65, state=RUNNABLE; CloseRegionProcedure 7717df2a311e88ba87e9fd764d6ac069, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:39,460 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure ece056dcf807a14999e5af64c21c3ff5, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:39,461 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure 87fa5194bfbd03d36825de5de388b609, server=jenkins-hbase20.apache.org,42361,1689354809221}] 2023-07-14 17:13:39,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 17:13:39,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:39,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 892020cab5424ab6b4c288732ecbfdca, disabling compactions & flushes 2023-07-14 17:13:39,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 
after waiting 0 ms 2023-07-14 17:13:39,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8beec090c25553a91db6d6daf8984684, disabling compactions & flushes 2023-07-14 17:13:39,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. after waiting 0 ms 2023-07-14 17:13:39,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:39,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:39,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684. 2023-07-14 17:13:39,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8beec090c25553a91db6d6daf8984684: 2023-07-14 17:13:39,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca. 2023-07-14 17:13:39,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 892020cab5424ab6b4c288732ecbfdca: 2023-07-14 17:13:39,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7717df2a311e88ba87e9fd764d6ac069, disabling compactions & flushes 2023-07-14 17:13:39,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 
2023-07-14 17:13:39,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:39,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. after waiting 0 ms 2023-07-14 17:13:39,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:39,633 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=892020cab5424ab6b4c288732ecbfdca, regionState=CLOSED 2023-07-14 17:13:39,633 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819633"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354819633"}]},"ts":"1689354819633"} 2023-07-14 17:13:39,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:39,633 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ece056dcf807a14999e5af64c21c3ff5, disabling compactions & flushes 2023-07-14 17:13:39,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:39,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 2023-07-14 17:13:39,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. after waiting 0 ms 2023-07-14 17:13:39,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 
2023-07-14 17:13:39,635 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=8beec090c25553a91db6d6daf8984684, regionState=CLOSED 2023-07-14 17:13:39,635 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819635"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354819635"}]},"ts":"1689354819635"} 2023-07-14 17:13:39,638 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-14 17:13:39,638 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure 892020cab5424ab6b4c288732ecbfdca, server=jenkins-hbase20.apache.org,42361,1689354809221 in 180 msec 2023-07-14 17:13:39,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:39,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=66 2023-07-14 17:13:39,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; CloseRegionProcedure 8beec090c25553a91db6d6daf8984684, server=jenkins-hbase20.apache.org,38517,1689354813230 in 180 msec 2023-07-14 17:13:39,640 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=892020cab5424ab6b4c288732ecbfdca, UNASSIGN in 190 msec 2023-07-14 17:13:39,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069. 2023-07-14 17:13:39,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7717df2a311e88ba87e9fd764d6ac069: 2023-07-14 17:13:39,641 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8beec090c25553a91db6d6daf8984684, UNASSIGN in 192 msec 2023-07-14 17:13:39,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:39,642 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5. 
2023-07-14 17:13:39,642 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ece056dcf807a14999e5af64c21c3ff5: 2023-07-14 17:13:39,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 87fa5194bfbd03d36825de5de388b609, disabling compactions & flushes 2023-07-14 17:13:39,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:39,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 2023-07-14 17:13:39,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. after waiting 0 ms 2023-07-14 17:13:39,644 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7717df2a311e88ba87e9fd764d6ac069, regionState=CLOSED 2023-07-14 17:13:39,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 
2023-07-14 17:13:39,644 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354819644"}]},"ts":"1689354819644"} 2023-07-14 17:13:39,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,646 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=ece056dcf807a14999e5af64c21c3ff5, regionState=CLOSED 2023-07-14 17:13:39,646 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689354819646"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354819646"}]},"ts":"1689354819646"} 2023-07-14 17:13:39,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:39,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=65 2023-07-14 17:13:39,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=65, state=SUCCESS; CloseRegionProcedure 7717df2a311e88ba87e9fd764d6ac069, server=jenkins-hbase20.apache.org,42361,1689354809221 in 187 msec 2023-07-14 17:13:39,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609. 
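The UNASSIGN/CloseRegionProcedure entries above, one per region, are the master-side expansion of a single client disable request. A minimal sketch of that request, assuming the same Connection/Admin boilerplate as before (the class name and printout are illustrative, not from the test):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Submits the DisableTableProcedure (pid=63 above) and waits for it; each region of
      // the table goes through an UNASSIGN/CloseRegionProcedure like the ones logged here.
      admin.disableTable(tn);
      System.out.println("disabled: " + admin.isTableDisabled(tn));
    }
  }
}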
2023-07-14 17:13:39,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 87fa5194bfbd03d36825de5de388b609: 2023-07-14 17:13:39,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-14 17:13:39,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure ece056dcf807a14999e5af64c21c3ff5, server=jenkins-hbase20.apache.org,38517,1689354813230 in 188 msec 2023-07-14 17:13:39,654 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7717df2a311e88ba87e9fd764d6ac069, UNASSIGN in 203 msec 2023-07-14 17:13:39,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,655 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ece056dcf807a14999e5af64c21c3ff5, UNASSIGN in 205 msec 2023-07-14 17:13:39,655 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=87fa5194bfbd03d36825de5de388b609, regionState=CLOSED 2023-07-14 17:13:39,655 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689354819655"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354819655"}]},"ts":"1689354819655"} 2023-07-14 17:13:39,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-14 17:13:39,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure 87fa5194bfbd03d36825de5de388b609, server=jenkins-hbase20.apache.org,42361,1689354809221 in 196 msec 2023-07-14 17:13:39,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=63 2023-07-14 17:13:39,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=87fa5194bfbd03d36825de5de388b609, UNASSIGN in 211 msec 2023-07-14 17:13:39,662 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354819662"}]},"ts":"1689354819662"} 2023-07-14 17:13:39,664 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-14 17:13:39,665 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-14 17:13:39,668 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 230 msec 2023-07-14 17:13:39,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 17:13:39,747 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-14 17:13:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,766 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1337281511' 2023-07-14 17:13:39,767 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:39,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:39,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:39,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-14 17:13:39,785 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,785 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,786 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:39,786 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,786 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,790 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/recovered.edits] 2023-07-14 17:13:39,791 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/recovered.edits] 2023-07-14 17:13:39,791 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/recovered.edits] 2023-07-14 17:13:39,791 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/recovered.edits] 2023-07-14 17:13:39,791 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/recovered.edits] 2023-07-14 17:13:39,803 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca/recovered.edits/4.seqid 2023-07-14 17:13:39,803 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609/recovered.edits/4.seqid 2023-07-14 17:13:39,804 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/892020cab5424ab6b4c288732ecbfdca 2023-07-14 17:13:39,805 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/87fa5194bfbd03d36825de5de388b609 2023-07-14 17:13:39,805 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5/recovered.edits/4.seqid 2023-07-14 17:13:39,806 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684/recovered.edits/4.seqid 2023-07-14 17:13:39,806 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069/recovered.edits/4.seqid 2023-07-14 17:13:39,806 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ece056dcf807a14999e5af64c21c3ff5 2023-07-14 17:13:39,807 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8beec090c25553a91db6d6daf8984684 2023-07-14 17:13:39,807 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7717df2a311e88ba87e9fd764d6ac069 2023-07-14 17:13:39,807 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 17:13:39,811 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,821 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-14 17:13:39,824 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 
2023-07-14 17:13:39,826 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,826 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-14 17:13:39,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354819826"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354819826"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354819826"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354819826"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354819826"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,830 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 17:13:39,830 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 892020cab5424ab6b4c288732ecbfdca, NAME => 'Group_testTableMoveTruncateAndDrop,,1689354818363.892020cab5424ab6b4c288732ecbfdca.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7717df2a311e88ba87e9fd764d6ac069, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689354818363.7717df2a311e88ba87e9fd764d6ac069.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 8beec090c25553a91db6d6daf8984684, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689354818363.8beec090c25553a91db6d6daf8984684.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 87fa5194bfbd03d36825de5de388b609, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689354818363.87fa5194bfbd03d36825de5de388b609.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => ece056dcf807a14999e5af64c21c3ff5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689354818363.ece056dcf807a14999e5af64c21c3ff5.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 17:13:39,831 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
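Each MetaTableAccessor entry above is one Delete mutation against a region's row in hbase:meta, covering the whole info column family at the newest timestamp (the ts value 9223372036854775807 is HConstants.LATEST_TIMESTAMP). A hedged sketch of building such a mutation with the HBase client API follows; the region row used here is a placeholder, not one of the exact rows from this run.

    // Sketch of the kind of Delete that MetaTableAccessor logs above: one Delete per
    // region row in hbase:meta, removing every cell of the "info" family.
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteMetaRowSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Row key of a region in hbase:meta: <table>,<startKey>,<regionId>.<encodedName>.
          byte[] regionRow =
              Bytes.toBytes("SomeTable,,1689354818363.0123456789abcdef0123456789abcdef.");
          Delete d = new Delete(regionRow);
          // Delete the whole "info" family up to LATEST_TIMESTAMP, matching the
          // {"families":{"info":[...]},"ts":"9223372036854775807"} entries in the log.
          d.addFamily(HConstants.CATALOG_FAMILY, HConstants.LATEST_TIMESTAMP);
          meta.delete(d);
        }
      }
    }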
2023-07-14 17:13:39,831 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354819831"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:39,833 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-14 17:13:39,835 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 17:13:39,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 78 msec 2023-07-14 17:13:39,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-14 17:13:39,887 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-14 17:13:39,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:39,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:39,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:39,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:39,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
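The "Operation: DELETE ... procId: 74 completed" line is the client-side view of the DeleteTableProcedure finishing. The usual way a client triggers that procedure is disable-then-delete through the Admin API; the sketch below shows that standard pattern (the table name is taken from this log, but the snippet is otherwise not the test's exact code).

    // Standard client-side drop of a table via the Admin API; deleteTable() submits a
    // DeleteTableProcedure on the master and waits for it to complete.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          if (admin.tableExists(table)) {
            if (admin.isTableEnabled(table)) {
              admin.disableTable(table);   // unassign regions first
            }
            admin.deleteTable(table);      // archives region dirs, cleans hbase:meta
          }
        }
      }
    }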
2023-07-14 17:13:39,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:39,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:39,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:39,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:39,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:39,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1337281511, current retry=0 2023-07-14 17:13:39,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1337281511 => default 2023-07-14 17:13:39,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:39,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testTableMoveTruncateAndDrop_1337281511 2023-07-14 17:13:39,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:39,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:39,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:39,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:39,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:39,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
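The group cleanup above (move the test group's servers back to default, then remove the group) goes through the RSGroupAdmin endpoint; RSGroupAdminClient, the client wrapper visible later in this log's stack trace, issues those calls. A sketch follows, assuming the RSGroupAdminClient constructor that takes a Connection (the class lives in the hbase-rsgroup module); the hostnames, ports, and group name are simply the ones this run happened to use.

    // Sketch of the rsgroup cleanup recorded above: move servers back to "default",
    // then remove the now-empty test group.
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 38517));
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 42361));
          rsGroupAdmin.moveServers(servers, "default");   // "Move servers done: ... => default"
          rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_1337281511");
        }
      }
    }

The next tearDown step, moving the address jenkins-hbase20.apache.org:41281 (the master's RPC port in this run) into the "master" group, fails with the ConstraintException logged below because that address is not a registered RegionServer; the test logs it as a warning ("Got this on setup, FYI") and continues with the remaining cleanup.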
2023-07-14 17:13:39,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:39,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:39,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:39,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:39,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:39,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:39,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:39,947 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:39,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:39,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:39,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:39,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:39,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:39,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:39,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:39,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:39,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:39,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356019978, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:39,979 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:39,982 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:39,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:39,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:39,984 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:39,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:39,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:40,017 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=502 (was 423) Potentially hanging thread: hconnection-0x523b891-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972365421_17 at /127.0.0.1:34498 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_979727792_17 at /127.0.0.1:34602 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:38517-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase20:38517 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1469634444_17 at /127.0.0.1:34532 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:37685 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1469634444_17 at /127.0.0.1:57828 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a-prefix:jenkins-hbase20.apache.org,38517,1689354813230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a-prefix:jenkins-hbase20.apache.org,44093,1689354809062.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972365421_17 at /127.0.0.1:57896 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54612@0x6f382546-SendThread(127.0.0.1:54612) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase20:38517Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54612@0x6f382546 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972365421_17 at /127.0.0.1:57790 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x523b891-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1469634444_17 at /127.0.0.1:54298 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-637-acceptor-0@42ec5c91-ServerConnector@4d21a747{HTTP/1.1, (http/1.1)}{0.0.0.0:36911} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-1bc047b8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54612@0x6f382546-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1972365421_17 at /127.0.0.1:54268 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_979727792_17 at /127.0.0.1:54110 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1969264531-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:37685 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=797 (was 697) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 572) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=3781 (was 4146) 2023-07-14 17:13:40,018 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-14 17:13:40,035 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=502, OpenFileDescriptor=797, MaxFileDescriptor=60000, SystemLoadAverage=574, ProcessCount=173, AvailableMemoryMB=3779 2023-07-14 17:13:40,038 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-14 17:13:40,039 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-14 17:13:40,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:40,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:13:40,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:40,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:40,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:40,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:40,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:40,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:40,057 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:40,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:40,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,063 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:40,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:40,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:40,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356020083, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:40,084 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:13:40,086 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:40,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,088 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:40,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:40,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:40,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo* 2023-07-14 17:13:40,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:33882 deadline: 1689356020090, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 17:13:40,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo@ 2023-07-14 
17:13:40,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:33882 deadline: 1689356020091, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 17:13:40,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup - 2023-07-14 17:13:40,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 148.251.75.209:33882 deadline: 1689356020092, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 17:13:40,093 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo_123 2023-07-14 17:13:40,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-14 17:13:40,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:40,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:40,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:40,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:40,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:40,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:40,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:40,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup foo_123 2023-07-14 17:13:40,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:40,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:40,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:40,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:40,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:40,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:40,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:40,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:40,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:40,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:40,135 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:40,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:40,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:40,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:40,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:40,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356020149, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:40,150 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:40,152 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:40,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,154 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:40,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:40,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:40,174 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=505 (was 502) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=797 (was 797), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 574), ProcessCount=173 (was 173), AvailableMemoryMB=3775 (was 3779) 2023-07-14 17:13:40,174 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-14 17:13:40,188 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=505, OpenFileDescriptor=797, MaxFileDescriptor=60000, SystemLoadAverage=574, ProcessCount=173, AvailableMemoryMB=3773 2023-07-14 17:13:40,189 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-14 17:13:40,189 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-14 17:13:40,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:40,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
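The ConstraintException and the WARN "Got this on setup, FYI" above (and repeated below for testFailRemoveGroup) come from TestRSGroupsBase.tearDownAfterMethod, named in the stack trace: it tries to move the master's RPC address (jenkins-hbase20.apache.org:41281) back into the "master" rsgroup, RSGroupAdminServer rejects the move because that address is not an online region server, and the test logs the failure and continues. A hedged sketch of that tolerant pattern follows; only the class and method names are taken from the stack trace, the helper and its body are illustrative assumptions, not the actual TestRSGroupsBase source.

// Hedged sketch of the tolerant move behind the WARN "Got this on setup, FYI":
// the teardown appears to move the master's address into the "master" RSGroup
// and log-and-ignore the resulting ConstraintException, since the master's RPC
// port is not an online region server. Illustrative assumption, not HBase source.
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MasterGroupCleanupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(MasterGroupCleanupSketch.class);

  static void restoreMasterGroup(RSGroupAdminClient rsGroupAdmin, Address masterAddress) {
    try {
      rsGroupAdmin.addRSGroup("master");
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (IOException e) {
      // Expected when masterAddress (port 41281 here) belongs to the master,
      // not a region server; the test notes it and carries on.
      LOG.warn("Got this on setup, FYI", e);
    }
  }
}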
2023-07-14 17:13:40,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:40,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:40,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:40,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:40,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:40,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:40,208 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:40,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:40,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:40,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:40,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:40,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:40,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356020224, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:40,224 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:40,226 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:40,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,227 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:40,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:40,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:40,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:40,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:40,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//148.251.75.209 add rsgroup bar 2023-07-14 17:13:40,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:40,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:40,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:40,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:40,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:40,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:42361] to rsgroup bar 2023-07-14 17:13:40,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:40,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:40,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:40,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:40,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(238): Moving server region 773f58cde6eff004015f5064f08a8726, which do not belong to RSGroup bar 2023-07-14 17:13:40,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, REOPEN/MOVE 2023-07-14 17:13:40,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-14 17:13:40,252 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, REOPEN/MOVE 2023-07-14 17:13:40,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 17:13:40,254 
INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:40,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-14 17:13:40,255 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 17:13:40,255 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354820253"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354820253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354820253"}]},"ts":"1689354820253"} 2023-07-14 17:13:40,256 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44093,1689354809062, state=CLOSING 2023-07-14 17:13:40,257 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:40,259 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:40,260 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:40,260 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:40,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:40,411 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-14 17:13:40,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 773f58cde6eff004015f5064f08a8726, disabling compactions & flushes 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:13:40,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 
2023-07-14 17:13:40,413 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. after waiting 0 ms 2023-07-14 17:13:40,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:40,413 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=38.01 KB heapSize=58.27 KB 2023-07-14 17:13:40,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 773f58cde6eff004015f5064f08a8726 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-14 17:13:40,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=35.12 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/.tmp/info/64073c0d999f4171bd1e9fc02eaf202f 2023-07-14 17:13:40,452 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/.tmp/info/64073c0d999f4171bd1e9fc02eaf202f as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info/64073c0d999f4171bd1e9fc02eaf202f 2023-07-14 17:13:40,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info/64073c0d999f4171bd1e9fc02eaf202f, entries=2, sequenceid=6, filesize=4.8 K 2023-07-14 17:13:40,464 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 773f58cde6eff004015f5064f08a8726 in 51ms, sequenceid=6, compaction requested=false 2023-07-14 17:13:40,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-14 17:13:40,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:40,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 773f58cde6eff004015f5064f08a8726: 2023-07-14 17:13:40,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 773f58cde6eff004015f5064f08a8726 move to jenkins-hbase20.apache.org,46457,1689354809303 record at close sequenceid=6 2023-07-14 17:13:40,486 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:40,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:40,489 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/rep_barrier/6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,516 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=97 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,523 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/7a7a5190269f4907accc7edecfb95b60 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,533 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,534 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/7a7a5190269f4907accc7edecfb95b60, entries=23, sequenceid=97, filesize=7.5 K 2023-07-14 17:13:40,535 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/rep_barrier/6c151122697740de8d3cb9b9f6eaeb1e as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier/6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,542 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,542 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier/6c151122697740de8d3cb9b9f6eaeb1e, entries=10, sequenceid=97, filesize=6.1 K 2023-07-14 17:13:40,544 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/39f88b9bdff24e6b892d31e5d675675b as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/39f88b9bdff24e6b892d31e5d675675b, entries=11, sequenceid=97, filesize=6.0 K 2023-07-14 17:13:40,554 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~38.01 KB/38921, heapSize ~58.23 KB/59624, currentSize=0 B/0 for 1588230740 in 141ms, sequenceid=97, compaction requested=false 2023-07-14 17:13:40,568 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/recovered.edits/100.seqid, newMaxSeqId=100, maxSeqId=17 2023-07-14 17:13:40,574 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:13:40,575 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:13:40,576 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:13:40,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase20.apache.org,46457,1689354809303 record at close sequenceid=97 2023-07-14 17:13:40,579 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 
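The close-and-flush activity just logged is the fallout of the earlier request at 17:13:40,241 to move servers 38517, 44093, and 42361 into rsgroup bar: RSGroupAdminServer relocates regions on those servers that do not belong to the target group (hbase:namespace and hbase:meta stay with default), scheduling REOPEN/MOVE TransitRegionStateProcedures that close each region on jenkins-hbase20.apache.org,44093 and reopen it on the remaining default-group server, 46457. A minimal sketch of issuing that move from a client is below; Address.fromParts and the moveServers signature are assumptions based on the HBase 2.x API, not quoted from this log.

// Hedged sketch: moving three region servers into rsgroup "bar", as in the log
// above. Hostname and ports are the ones the log reports; the client API
// (Address.fromParts, RSGroupAdminClient.moveServers) is assumed from the
// HBase 2.x hbase-rsgroup module.
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

class MoveServersToBarSketch {
  static void moveToBar(RSGroupAdminClient rsGroupAdmin) throws IOException {
    rsGroupAdmin.addRSGroup("bar");
    Set<Address> servers = new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase20.apache.org", 38517),
        Address.fromParts("jenkins-hbase20.apache.org", 44093),
        Address.fromParts("jenkins-hbase20.apache.org", 42361)));
    // Regions hosted on these servers but owned by other groups' tables are
    // reopened on servers remaining in their own group (here, hbase:meta and
    // hbase:namespace move to the one server left in "default").
    rsGroupAdmin.moveServers(servers, "bar");
  }
}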
2023-07-14 17:13:40,579 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-14 17:13:40,581 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-14 17:13:40,581 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44093,1689354809062 in 320 msec 2023-07-14 17:13:40,583 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:40,733 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46457,1689354809303, state=OPENING 2023-07-14 17:13:40,734 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:40,734 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:40,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:40,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 17:13:40,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:13:40,902 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46457%2C1689354809303.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,46457,1689354809303, archiveDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs, maxLogs=32 2023-07-14 17:13:40,917 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK] 2023-07-14 17:13:40,918 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK] 2023-07-14 17:13:40,919 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK] 2023-07-14 17:13:40,923 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/WALs/jenkins-hbase20.apache.org,46457,1689354809303/jenkins-hbase20.apache.org%2C46457%2C1689354809303.meta.1689354820903.meta 2023-07-14 17:13:40,923 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33411,DS-b506ef08-1752-4b8c-8067-a73e3b0f1923,DISK], DatanodeInfoWithStorage[127.0.0.1:34029,DS-a864465c-e7eb-46d5-9a63-6dd6d85e72b7,DISK], DatanodeInfoWithStorage[127.0.0.1:39185,DS-6e62d309-3f23-47da-aa86-9f8ebbe68b38,DISK]] 2023-07-14 17:13:40,923 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 17:13:40,924 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 17:13:40,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 17:13:40,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:13:40,927 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:40,928 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info 2023-07-14 17:13:40,928 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:13:40,935 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/0e82c32b62ac4c98992d2d3a10a44f24 2023-07-14 17:13:40,942 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,942 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/7a7a5190269f4907accc7edecfb95b60 2023-07-14 17:13:40,942 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:40,942 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:13:40,943 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:40,943 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:13:40,944 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:13:40,953 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,953 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier/6c151122697740de8d3cb9b9f6eaeb1e 2023-07-14 17:13:40,953 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:40,953 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:13:40,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:40,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table 2023-07-14 17:13:40,955 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:13:40,965 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/3460883fcabf46ff8c029dc735128b22 2023-07-14 17:13:40,975 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,975 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/39f88b9bdff24e6b892d31e5d675675b 2023-07-14 17:13:40,975 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:40,977 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:40,979 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740 2023-07-14 17:13:40,982 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-14 17:13:40,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:13:40,986 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=101; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9718925920, jitterRate=-0.09485448896884918}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:13:40,986 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:13:40,988 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689354820893 2023-07-14 17:13:40,991 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 17:13:40,991 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 17:13:40,992 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46457,1689354809303, state=OPEN 2023-07-14 17:13:40,993 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:13:40,993 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:13:40,995 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=CLOSED 2023-07-14 17:13:40,995 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354820995"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354820995"}]},"ts":"1689354820995"} 2023-07-14 17:13:40,996 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44093] ipc.CallRunner(144): callId: 179 service: ClientService methodName: Mutate size: 218 connection: 148.251.75.209:38872 deadline: 1689354880996, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=46457 startCode=1689354809303. As of locationSeqNum=97. 
2023-07-14 17:13:40,998 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-14 17:13:40,999 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46457,1689354809303 in 259 msec 2023-07-14 17:13:41,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 746 msec 2023-07-14 17:13:41,101 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-14 17:13:41,101 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,44093,1689354809062 in 842 msec 2023-07-14 17:13:41,102 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:41,252 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:41,252 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354821252"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354821252"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354821252"}]},"ts":"1689354821252"} 2023-07-14 17:13:41,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:41,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-14 17:13:41,408 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 
2023-07-14 17:13:41,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 773f58cde6eff004015f5064f08a8726, NAME => 'hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:41,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:41,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,409 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,410 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,412 DEBUG [StoreOpener-773f58cde6eff004015f5064f08a8726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info 2023-07-14 17:13:41,412 DEBUG [StoreOpener-773f58cde6eff004015f5064f08a8726-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info 2023-07-14 17:13:41,412 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 773f58cde6eff004015f5064f08a8726 columnFamilyName info 2023-07-14 17:13:41,422 DEBUG [StoreOpener-773f58cde6eff004015f5064f08a8726-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/info/64073c0d999f4171bd1e9fc02eaf202f 2023-07-14 17:13:41,422 INFO [StoreOpener-773f58cde6eff004015f5064f08a8726-1] regionserver.HStore(310): Store=773f58cde6eff004015f5064f08a8726/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:41,424 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,426 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,429 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:41,431 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 773f58cde6eff004015f5064f08a8726; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9472111840, jitterRate=-0.11784084141254425}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:41,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 773f58cde6eff004015f5064f08a8726: 2023-07-14 17:13:41,432 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726., pid=80, masterSystemTime=1689354821405 2023-07-14 17:13:41,433 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:41,434 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 
2023-07-14 17:13:41,434 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=773f58cde6eff004015f5064f08a8726, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:41,434 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354821434"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354821434"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354821434"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354821434"}]},"ts":"1689354821434"} 2023-07-14 17:13:41,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-14 17:13:41,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 773f58cde6eff004015f5064f08a8726, server=jenkins-hbase20.apache.org,46457,1689354809303 in 187 msec 2023-07-14 17:13:41,446 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=773f58cde6eff004015f5064f08a8726, REOPEN/MOVE in 1.1940 sec 2023-07-14 17:13:42,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221, jenkins-hbase20.apache.org,44093,1689354809062] are moved back to default 2023-07-14 17:13:42,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-14 17:13:42,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:42,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:42,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:42,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-14 17:13:42,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:42,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:42,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] 
procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:42,279 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:42,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-14 17:13:42,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 17:13:42,289 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:42,290 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:42,290 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:42,291 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:42,295 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:42,297 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,297 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 empty. 
2023-07-14 17:13:42,298 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,298 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-14 17:13:42,314 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:42,315 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ec9f9dcd90f2bb9523374ddd5e2a5470, NAME => 'Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ec9f9dcd90f2bb9523374ddd5e2a5470, disabling compactions & flushes 2023-07-14 17:13:42,334 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. after waiting 0 ms 2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:42,334 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:42,334 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:42,338 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:42,339 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354822339"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354822339"}]},"ts":"1689354822339"} 2023-07-14 17:13:42,343 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:42,347 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:42,347 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354822347"}]},"ts":"1689354822347"} 2023-07-14 17:13:42,349 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-14 17:13:42,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, ASSIGN}] 2023-07-14 17:13:42,356 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, ASSIGN 2023-07-14 17:13:42,357 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:42,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 17:13:42,510 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:42,510 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354822510"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354822510"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354822510"}]},"ts":"1689354822510"} 2023-07-14 17:13:42,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303}] 
2023-07-14 17:13:42,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 17:13:42,672 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:42,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec9f9dcd90f2bb9523374ddd5e2a5470, NAME => 'Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:42,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:42,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,676 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,680 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:42,680 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:42,681 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec9f9dcd90f2bb9523374ddd5e2a5470 columnFamilyName f 2023-07-14 17:13:42,682 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(310): Store=ec9f9dcd90f2bb9523374ddd5e2a5470/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-14 17:13:42,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:42,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:42,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ec9f9dcd90f2bb9523374ddd5e2a5470; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11605060480, jitterRate=0.08080548048019409}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:42,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:42,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470., pid=83, masterSystemTime=1689354822667 2023-07-14 17:13:42,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:42,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:42,698 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:42,698 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354822698"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354822698"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354822698"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354822698"}]},"ts":"1689354822698"} 2023-07-14 17:13:42,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-14 17:13:42,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303 in 187 msec 2023-07-14 17:13:42,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-14 17:13:42,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, ASSIGN in 349 msec 2023-07-14 17:13:42,705 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:42,705 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354822705"}]},"ts":"1689354822705"} 2023-07-14 17:13:42,707 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-14 17:13:42,709 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:42,712 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 445 msec 2023-07-14 17:13:42,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 17:13:42,888 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-14 17:13:42,888 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-14 17:13:42,888 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:42,891 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44093] ipc.CallRunner(144): callId: 276 service: ClientService methodName: Scan size: 96 connection: 148.251.75.209:38878 deadline: 1689354882890, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=46457 startCode=1689354809303. As of locationSeqNum=97. 2023-07-14 17:13:42,951 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 17:13:42,993 DEBUG [hconnection-0x37307bc-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:13:43,007 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57880, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:13:43,019 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-14 17:13:43,019 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:43,019 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-14 17:13:43,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-14 17:13:43,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:43,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:43,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:43,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:43,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-14 17:13:43,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region ec9f9dcd90f2bb9523374ddd5e2a5470 to RSGroup bar 2023-07-14 17:13:43,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:43,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:43,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:43,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:43,037 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-14 17:13:43,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:43,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE 2023-07-14 17:13:43,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-14 17:13:43,039 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE 2023-07-14 17:13:43,040 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:43,041 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354823040"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354823040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354823040"}]},"ts":"1689354823040"} 2023-07-14 17:13:43,042 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:43,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ec9f9dcd90f2bb9523374ddd5e2a5470, disabling compactions & flushes 2023-07-14 17:13:43,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:43,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:43,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. after waiting 0 ms 2023-07-14 17:13:43,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:43,206 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:43,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:43,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:43,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding ec9f9dcd90f2bb9523374ddd5e2a5470 move to jenkins-hbase20.apache.org,44093,1689354809062 record at close sequenceid=2 2023-07-14 17:13:43,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,212 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSED 2023-07-14 17:13:43,212 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354823212"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354823212"}]},"ts":"1689354823212"} 2023-07-14 17:13:43,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-14 17:13:43,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303 in 172 msec 2023-07-14 17:13:43,217 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:43,368 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:13:43,368 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:43,368 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354823368"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354823368"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354823368"}]},"ts":"1689354823368"} 2023-07-14 17:13:43,370 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:43,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:43,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec9f9dcd90f2bb9523374ddd5e2a5470, NAME => 'Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:43,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:43,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,530 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,531 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:43,531 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:43,531 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec9f9dcd90f2bb9523374ddd5e2a5470 columnFamilyName f 2023-07-14 17:13:43,532 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(310): Store=ec9f9dcd90f2bb9523374ddd5e2a5470/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:43,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:43,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ec9f9dcd90f2bb9523374ddd5e2a5470; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10137467520, jitterRate=-0.055874764919281006}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:43,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:43,541 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470., pid=86, masterSystemTime=1689354823522 2023-07-14 17:13:43,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:43,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:43,543 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:43,543 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354823542"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354823542"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354823542"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354823542"}]},"ts":"1689354823542"} 2023-07-14 17:13:43,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-14 17:13:43,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,44093,1689354809062 in 174 msec 2023-07-14 17:13:43,547 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE in 509 msec 2023-07-14 17:13:43,947 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-14 17:13:44,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-14 17:13:44,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 2023-07-14 17:13:44,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:44,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:44,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:44,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-14 17:13:44,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:44,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-14 17:13:44,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:44,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:33882 deadline: 1689356024049, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-14 17:13:44,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:44,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:44,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 191 connection: 148.251.75.209:33882 deadline: 1689356024050, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
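
The two rejected RPCs above come from the rsgroup admin endpoint enforcing its invariants: a group that still owns tables cannot be removed, and servers cannot be moved out of a group if doing so would leave its tables with no host. Below is a minimal client-side sketch of the same two calls, assuming the RSGroupAdminClient API that appears in the stack traces; the Connection `conn` and the literal host:port values (taken from this run) are placeholders, not part of the log.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupConstraintsSketch {
  // Sketch only: reproduces the two calls that the log shows being rejected
  // while table Group_testFailRemoveGroup is still assigned to rsgroup "bar".
  static void tryRemoveNonEmptyGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    try {
      // Rejected: rsgroup "bar" still owns table Group_testFailRemoveGroup.
      rsGroupAdmin.removeRSGroup("bar");
    } catch (ConstraintException e) {
      // "RSGroup bar has 1 tables; you must remove these tables ..."
    }

    Set<Address> barServers = new HashSet<>(Arrays.asList(
        Address.fromString("jenkins-hbase20.apache.org:38517"),
        Address.fromString("jenkins-hbase20.apache.org:44093"),
        Address.fromString("jenkins-hbase20.apache.org:42361")));
    try {
      // Rejected: moving every server out would leave the group's table unhosted.
      rsGroupAdmin.moveServers(barServers, "default");
    } catch (ConstraintException e) {
      // "Cannot leave a RSGroup bar that contains tables without servers ..."
    }
  }
}
```
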
2023-07-14 17:13:44,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-14 17:13:44,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:44,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:44,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:44,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:44,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-14 17:13:44,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region ec9f9dcd90f2bb9523374ddd5e2a5470 to RSGroup default 2023-07-14 17:13:44,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE 2023-07-14 17:13:44,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 17:13:44,060 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE 2023-07-14 17:13:44,061 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:44,061 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354824061"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354824061"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354824061"}]},"ts":"1689354824061"} 2023-07-14 17:13:44,065 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:44,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ec9f9dcd90f2bb9523374ddd5e2a5470, disabling compactions & flushes 2023-07-14 17:13:44,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:44,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:44,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. after waiting 0 ms 2023-07-14 17:13:44,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:44,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:44,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:44,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:44,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding ec9f9dcd90f2bb9523374ddd5e2a5470 move to jenkins-hbase20.apache.org,46457,1689354809303 record at close sequenceid=5 2023-07-14 17:13:44,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,232 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSED 2023-07-14 17:13:44,232 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354824231"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354824231"}]},"ts":"1689354824231"} 2023-07-14 17:13:44,235 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-14 17:13:44,235 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,44093,1689354809062 in 171 msec 2023-07-14 17:13:44,236 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:44,386 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:44,386 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354824386"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354824386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354824386"}]},"ts":"1689354824386"} 2023-07-14 17:13:44,388 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:44,545 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:44,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec9f9dcd90f2bb9523374ddd5e2a5470, NAME => 'Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:44,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:44,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,547 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,548 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:44,549 DEBUG [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f 2023-07-14 17:13:44,549 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec9f9dcd90f2bb9523374ddd5e2a5470 columnFamilyName f 2023-07-14 17:13:44,550 INFO [StoreOpener-ec9f9dcd90f2bb9523374ddd5e2a5470-1] regionserver.HStore(310): Store=ec9f9dcd90f2bb9523374ddd5e2a5470/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:44,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:44,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ec9f9dcd90f2bb9523374ddd5e2a5470; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10153812800, jitterRate=-0.05435249209403992}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:44,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:44,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470., pid=89, masterSystemTime=1689354824540 2023-07-14 17:13:44,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:44,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
2023-07-14 17:13:44,560 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:44,560 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354824560"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354824560"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354824560"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354824560"}]},"ts":"1689354824560"} 2023-07-14 17:13:44,564 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-14 17:13:44,564 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303 in 174 msec 2023-07-14 17:13:44,565 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, REOPEN/MOVE in 506 msec 2023-07-14 17:13:45,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-14 17:13:45,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-14 17:13:45,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:45,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-14 17:13:45,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:45,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:33882 deadline: 1689356025067, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 2023-07-14 17:13:45,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:45,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 17:13:45,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-14 17:13:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221, jenkins-hbase20.apache.org,44093,1689354809062] are moved back to bar 2023-07-14 17:13:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-14 17:13:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:45,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 
17:13:45,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-14 17:13:45,083 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44093] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Scan size: 147 connection: 148.251.75.209:38872 deadline: 1689354885083, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=46457 startCode=1689354809303. As of locationSeqNum=6. 2023-07-14 17:13:45,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:45,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:45,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,206 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-14 17:13:45,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testFailRemoveGroup 2023-07-14 17:13:45,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 17:13:45,210 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354825210"}]},"ts":"1689354825210"} 2023-07-14 17:13:45,212 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-14 17:13:45,213 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-14 17:13:45,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, 
ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, UNASSIGN}] 2023-07-14 17:13:45,216 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, UNASSIGN 2023-07-14 17:13:45,217 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:45,217 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354825217"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354825217"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354825217"}]},"ts":"1689354825217"} 2023-07-14 17:13:45,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:45,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 17:13:45,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:45,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ec9f9dcd90f2bb9523374ddd5e2a5470, disabling compactions & flushes 2023-07-14 17:13:45,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:45,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:45,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. after waiting 0 ms 2023-07-14 17:13:45,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 2023-07-14 17:13:45,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 17:13:45,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470. 
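
Once the table had been moved back to the default rsgroup, the preceding entries show the remaining cleanup succeeding: the three servers are moved to the default group ("Move servers done: bar => default") and rsgroup bar is then removed (the ZK GroupInfo count drops from 6 to 5). A minimal sketch of that ordering follows, again assuming the RSGroupAdminClient API from the stack traces; `conn`, the table name, and the group name are placeholders taken from this run.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class EmptyAndRemoveGroupSketch {
  // Sketch only: the order of operations that the log shows succeeding.
  static void emptyAndRemoveGroup(Connection conn, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // 1. Move the group's tables back to the default group
    //    (drives the TransitRegionStateProcedure REOPEN/MOVE seen earlier).
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
        RSGroupInfo.DEFAULT_GROUP);

    // 2. Move the group's servers back to the default group.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    rsGroupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);

    // 3. Only an empty group can be removed.
    rsGroupAdmin.removeRSGroup(group);
  }
}
```
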
2023-07-14 17:13:45,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ec9f9dcd90f2bb9523374ddd5e2a5470: 2023-07-14 17:13:45,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:45,380 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ec9f9dcd90f2bb9523374ddd5e2a5470, regionState=CLOSED 2023-07-14 17:13:45,381 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689354825380"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354825380"}]},"ts":"1689354825380"} 2023-07-14 17:13:45,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-14 17:13:45,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure ec9f9dcd90f2bb9523374ddd5e2a5470, server=jenkins-hbase20.apache.org,46457,1689354809303 in 163 msec 2023-07-14 17:13:45,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-14 17:13:45,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ec9f9dcd90f2bb9523374ddd5e2a5470, UNASSIGN in 170 msec 2023-07-14 17:13:45,386 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354825386"}]},"ts":"1689354825386"} 2023-07-14 17:13:45,388 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-14 17:13:45,389 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-14 17:13:45,396 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-14 17:13:45,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 17:13:45,513 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-14 17:13:45,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testFailRemoveGroup 2023-07-14 17:13:45,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,517 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-14 17:13:45,517 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,522 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:45,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:45,524 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits] 2023-07-14 17:13:45,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-14 17:13:45,531 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/10.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470/recovered.edits/10.seqid 2023-07-14 17:13:45,531 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testFailRemoveGroup/ec9f9dcd90f2bb9523374ddd5e2a5470 2023-07-14 17:13:45,531 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-14 17:13:45,534 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,536 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-14 17:13:45,538 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-14 17:13:45,539 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,539 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-14 17:13:45,539 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354825539"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:45,541 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 17:13:45,541 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ec9f9dcd90f2bb9523374ddd5e2a5470, NAME => 'Group_testFailRemoveGroup,,1689354822263.ec9f9dcd90f2bb9523374ddd5e2a5470.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 17:13:45,541 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-14 17:13:45,541 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354825541"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:45,543 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-14 17:13:45,545 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 17:13:45,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 31 msec 2023-07-14 17:13:45,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-14 17:13:45,631 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-14 17:13:45,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:45,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
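
With the group gone, the test drops the table itself: DisableTableProcedure (pid=90) unassigns the region and marks the table DISABLED in hbase:meta, and DeleteTableProcedure (pid=93) archives the region directory and deletes the table's rows and state from hbase:meta. The equivalent client-side calls, sketched with the standard Admin API (`conn` is a placeholder, not part of the log):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTableSketch {
  // Sketch only: the disable/delete pair that drives pid=90 and pid=93 above.
  static void dropTable(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);  // unassigns regions, table state -> DISABLED
      }
      admin.deleteTable(table);     // archives region dirs, removes hbase:meta rows
    }
  }
}
```
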
2023-07-14 17:13:45,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:45,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:45,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:45,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:45,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:45,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:45,646 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:45,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:45,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:45,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:45,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:45,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:45,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356025669, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:45,670 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:45,672 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:45,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,674 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:45,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:45,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:45,692 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=520 (was 505) Potentially hanging thread: hconnection-0x523b891-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280957736_17 at /127.0.0.1:60540 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280957736_17 at /127.0.0.1:48994 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a-prefix:jenkins-hbase20.apache.org,46457,1689354809303.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280957736_17 at /127.0.0.1:45440 [Receiving block BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1974583233_17 at /127.0.0.1:60582 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x37307bc-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async 
disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280957736_17 at /127.0.0.1:45452 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1974583233_17 at /127.0.0.1:54110 [Waiting for operation #20] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-103047219-148.251.75.209-1689354803494:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 797) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=544 (was 574), ProcessCount=173 (was 173), AvailableMemoryMB=3677 (was 3773) 2023-07-14 17:13:45,692 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-14 17:13:45,708 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=520, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=544, ProcessCount=173, AvailableMemoryMB=3676 2023-07-14 17:13:45,708 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-14 17:13:45,708 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-14 17:13:45,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:45,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:45,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:45,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:45,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:45,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:45,724 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:45,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:45,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:45,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:45,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:45,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:45,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356025736, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:45,737 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:45,741 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:45,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,743 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:45,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:45,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:45,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:45,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:45,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testMultiTableMove_225331531 2023-07-14 17:13:45,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:45,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 
17:13:45,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:45,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:45,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517] to rsgroup Group_testMultiTableMove_225331531 2023-07-14 17:13:45,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:45,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:45,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:45,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230] are moved back to default 2023-07-14 17:13:45,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_225331531 2023-07-14 17:13:45,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:45,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:45,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:45,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_225331531 2023-07-14 17:13:45,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:45,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:45,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:45,781 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:45,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-14 17:13:45,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 17:13:45,783 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:45,784 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:45,784 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:45,784 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:45,787 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:45,789 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:45,790 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 empty. 
2023-07-14 17:13:45,791 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:45,791 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-14 17:13:45,808 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:45,809 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => f519d4be975ba1421a3d8bd73b005433, NAME => 'GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing f519d4be975ba1421a3d8bd73b005433, disabling compactions & flushes 2023-07-14 17:13:45,829 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. after waiting 0 ms 2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:45,829 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:45,829 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for f519d4be975ba1421a3d8bd73b005433: 2023-07-14 17:13:45,832 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:45,833 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354825833"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354825833"}]},"ts":"1689354825833"} 2023-07-14 17:13:45,835 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:45,836 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:45,836 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354825836"}]},"ts":"1689354825836"} 2023-07-14 17:13:45,838 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-14 17:13:45,840 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:45,840 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:45,841 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:45,841 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:45,841 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:45,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, ASSIGN}] 2023-07-14 17:13:45,843 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, ASSIGN 2023-07-14 17:13:45,844 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:45,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 17:13:45,995 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:13:45,996 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:45,996 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354825996"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354825996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354825996"}]},"ts":"1689354825996"} 2023-07-14 17:13:45,998 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:46,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 17:13:46,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:46,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f519d4be975ba1421a3d8bd73b005433, NAME => 'GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:46,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:46,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,156 INFO [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,157 DEBUG [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/f 2023-07-14 17:13:46,157 DEBUG [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/f 2023-07-14 17:13:46,158 INFO [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f519d4be975ba1421a3d8bd73b005433 columnFamilyName f 2023-07-14 17:13:46,159 INFO [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] regionserver.HStore(310): Store=f519d4be975ba1421a3d8bd73b005433/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:46,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:46,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:46,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f519d4be975ba1421a3d8bd73b005433; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11971589600, jitterRate=0.11494116485118866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:46,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f519d4be975ba1421a3d8bd73b005433: 2023-07-14 17:13:46,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433., pid=96, masterSystemTime=1689354826150 2023-07-14 17:13:46,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:46,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:46,168 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:46,168 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354826168"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354826168"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354826168"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354826168"}]},"ts":"1689354826168"} 2023-07-14 17:13:46,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-14 17:13:46,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,44093,1689354809062 in 171 msec 2023-07-14 17:13:46,172 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-14 17:13:46,172 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, ASSIGN in 330 msec 2023-07-14 17:13:46,173 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:46,173 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354826173"}]},"ts":"1689354826173"} 2023-07-14 17:13:46,175 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-14 17:13:46,177 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:46,178 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 400 msec 2023-07-14 17:13:46,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 17:13:46,387 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-14 17:13:46,387 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-14 17:13:46,387 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:46,406 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
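(Illustrative aside, not part of the log: the CREATE tracked by procId 94 above corresponds to an ordinary HBase 2.x Admin.createTable call for a table with a single 'f' column family, followed by the testing utility's wait for assignment. A minimal sketch of such a call is below; conn and util are assumed handles, not names taken from this test, and the descriptor options are left at defaults as in the logged schema.)

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      // Creates a one-family table like GrouptestMultiTableMoveA and waits for assignment.
      static void createAndWait(Connection conn, HBaseTestingUtility util) throws Exception {
        TableName name = TableName.valueOf("GrouptestMultiTableMoveA");
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)
                  .build())
              .build());
        }
        // Blocks until every region of the table is assigned; the "Waiting until all
        // regions of table ... get assigned" lines above are the kind of output this
        // wait produces.
        util.waitUntilAllRegionsAssigned(name);
      }
    }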
2023-07-14 17:13:46,407 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:46,407 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-14 17:13:46,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:46,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:46,413 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:46,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-14 17:13:46,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 17:13:46,417 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:46,417 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:46,417 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:46,418 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:46,420 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:46,423 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:46,423 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df empty. 
2023-07-14 17:13:46,424 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:46,424 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-14 17:13:46,516 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:46,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 17:13:46,523 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => ee258ff2fccf29052a852a46e5a879df, NAME => 'GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing ee258ff2fccf29052a852a46e5a879df, disabling compactions & flushes 2023-07-14 17:13:46,598 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. after waiting 0 ms 2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:46,598 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 
2023-07-14 17:13:46,598 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for ee258ff2fccf29052a852a46e5a879df: 2023-07-14 17:13:46,601 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:46,602 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354826602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354826602"}]},"ts":"1689354826602"} 2023-07-14 17:13:46,604 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:46,605 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:46,606 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354826606"}]},"ts":"1689354826606"} 2023-07-14 17:13:46,612 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-14 17:13:46,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 17:13:46,813 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:46,814 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:46,814 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:46,814 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:46,814 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:46,815 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, ASSIGN}] 2023-07-14 17:13:46,820 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, ASSIGN 2023-07-14 17:13:46,822 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:46,973 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:13:46,974 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:46,974 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354826974"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354826974"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354826974"}]},"ts":"1689354826974"} 2023-07-14 17:13:46,976 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:47,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 17:13:47,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:47,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ee258ff2fccf29052a852a46e5a879df, NAME => 'GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:47,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:47,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,138 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,143 DEBUG [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/f 2023-07-14 17:13:47,143 DEBUG [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/f 2023-07-14 17:13:47,144 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ee258ff2fccf29052a852a46e5a879df columnFamilyName f 2023-07-14 17:13:47,147 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] regionserver.HStore(310): Store=ee258ff2fccf29052a852a46e5a879df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:47,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:47,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ee258ff2fccf29052a852a46e5a879df; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11352984000, jitterRate=0.057329028844833374}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:47,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ee258ff2fccf29052a852a46e5a879df: 2023-07-14 17:13:47,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df., pid=99, masterSystemTime=1689354827128 2023-07-14 17:13:47,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:47,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 
2023-07-14 17:13:47,165 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:47,165 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827165"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354827165"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354827165"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354827165"}]},"ts":"1689354827165"} 2023-07-14 17:13:47,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-14 17:13:47,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,46457,1689354809303 in 191 msec 2023-07-14 17:13:47,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-14 17:13:47,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, ASSIGN in 356 msec 2023-07-14 17:13:47,181 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:47,181 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354827181"}]},"ts":"1689354827181"} 2023-07-14 17:13:47,183 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-14 17:13:47,199 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:47,202 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 790 msec 2023-07-14 17:13:47,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 17:13:47,610 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-14 17:13:47,610 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-14 17:13:47,611 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:47,617 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-14 17:13:47,618 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:47,618 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-14 17:13:47,619 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:47,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-14 17:13:47,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:47,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-14 17:13:47,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:47,637 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_225331531 2023-07-14 17:13:47,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_225331531 2023-07-14 17:13:47,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:47,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:47,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:47,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_225331531 2023-07-14 17:13:47,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region ee258ff2fccf29052a852a46e5a879df to RSGroup Group_testMultiTableMove_225331531 2023-07-14 17:13:47,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, REOPEN/MOVE 2023-07-14 17:13:47,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup 
Group_testMultiTableMove_225331531 2023-07-14 17:13:47,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region f519d4be975ba1421a3d8bd73b005433 to RSGroup Group_testMultiTableMove_225331531 2023-07-14 17:13:47,649 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, REOPEN/MOVE 2023-07-14 17:13:47,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, REOPEN/MOVE 2023-07-14 17:13:47,650 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:47,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_225331531, current retry=0 2023-07-14 17:13:47,651 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, REOPEN/MOVE 2023-07-14 17:13:47,651 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827650"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354827650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354827650"}]},"ts":"1689354827650"} 2023-07-14 17:13:47,653 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:47,661 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:47,661 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827660"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354827660"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354827660"}]},"ts":"1689354827660"} 2023-07-14 17:13:47,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:47,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ee258ff2fccf29052a852a46e5a879df, disabling compactions & flushes 2023-07-14 17:13:47,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): 
Closing region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:47,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:47,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. after waiting 0 ms 2023-07-14 17:13:47,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:47,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:47,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f519d4be975ba1421a3d8bd73b005433, disabling compactions & flushes 2023-07-14 17:13:47,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:47,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:47,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. after waiting 0 ms 2023-07-14 17:13:47,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:47,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:47,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 
2023-07-14 17:13:47,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ee258ff2fccf29052a852a46e5a879df: 2023-07-14 17:13:47,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding ee258ff2fccf29052a852a46e5a879df move to jenkins-hbase20.apache.org,38517,1689354813230 record at close sequenceid=2 2023-07-14 17:13:47,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:47,824 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=CLOSED 2023-07-14 17:13:47,824 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827824"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354827824"}]},"ts":"1689354827824"} 2023-07-14 17:13:47,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:47,827 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-14 17:13:47,827 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,46457,1689354809303 in 173 msec 2023-07-14 17:13:47,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:47,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f519d4be975ba1421a3d8bd73b005433: 2023-07-14 17:13:47,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f519d4be975ba1421a3d8bd73b005433 move to jenkins-hbase20.apache.org,38517,1689354813230 record at close sequenceid=2 2023-07-14 17:13:47,828 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:47,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:47,830 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=CLOSED 2023-07-14 17:13:47,830 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827829"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354827829"}]},"ts":"1689354827829"} 2023-07-14 17:13:47,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-14 17:13:47,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,44093,1689354809062 in 168 msec 2023-07-14 17:13:47,833 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:47,979 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:47,979 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:47,979 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827979"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354827979"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354827979"}]},"ts":"1689354827979"} 2023-07-14 17:13:47,979 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354827979"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354827979"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354827979"}]},"ts":"1689354827979"} 2023-07-14 17:13:47,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:47,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:48,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ee258ff2fccf29052a852a46e5a879df, NAME => 'GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,137 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,138 DEBUG [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/f 2023-07-14 17:13:48,138 DEBUG [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/f 2023-07-14 17:13:48,139 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ee258ff2fccf29052a852a46e5a879df columnFamilyName f 2023-07-14 17:13:48,140 INFO [StoreOpener-ee258ff2fccf29052a852a46e5a879df-1] regionserver.HStore(310): Store=ee258ff2fccf29052a852a46e5a879df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:48,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:48,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ee258ff2fccf29052a852a46e5a879df; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11082776960, jitterRate=0.032164037227630615}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:48,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ee258ff2fccf29052a852a46e5a879df: 2023-07-14 17:13:48,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df., pid=104, masterSystemTime=1689354828132 2023-07-14 17:13:48,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:48,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:48,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:48,149 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:48,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f519d4be975ba1421a3d8bd73b005433, NAME => 'GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:48,149 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354828149"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354828149"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354828149"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354828149"}]},"ts":"1689354828149"} 2023-07-14 17:13:48,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:48,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,152 INFO [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,152 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-14 17:13:48,152 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,38517,1689354813230 in 171 msec 2023-07-14 17:13:48,153 DEBUG [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/f 2023-07-14 17:13:48,153 DEBUG [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/f 2023-07-14 17:13:48,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, REOPEN/MOVE in 506 msec 2023-07-14 17:13:48,154 INFO 
[StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f519d4be975ba1421a3d8bd73b005433 columnFamilyName f 2023-07-14 17:13:48,154 INFO [StoreOpener-f519d4be975ba1421a3d8bd73b005433-1] regionserver.HStore(310): Store=f519d4be975ba1421a3d8bd73b005433/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:48,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f519d4be975ba1421a3d8bd73b005433; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9709870880, jitterRate=-0.0956978052854538}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:48,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f519d4be975ba1421a3d8bd73b005433: 2023-07-14 17:13:48,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433., pid=105, masterSystemTime=1689354828132 2023-07-14 17:13:48,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:48,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:48,166 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:48,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354828166"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354828166"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354828166"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354828166"}]},"ts":"1689354828166"} 2023-07-14 17:13:48,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-14 17:13:48,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,38517,1689354813230 in 187 msec 2023-07-14 17:13:48,170 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, REOPEN/MOVE in 520 msec 2023-07-14 17:13:48,176 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 17:13:48,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-14 17:13:48,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_225331531. 
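(Illustrative aside, not part of the log: the MoveTables request above, which moves GrouptestMultiTableMoveB and GrouptestMultiTableMoveA into Group_testMultiTableMove_225331531, is the kind of call issued through the rsgroup admin client shipped in the hbase-rsgroup module. A minimal sketch follows; conn is an assumed open Connection, and the RSGroupAdminClient class and method names are taken from the branch-2 rsgroup module but should be treated as an assumption rather than the test's exact code.)

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      // Moves both test tables to the target group, then reads back the group of one table.
      static void moveBoth(Connection conn, String targetGroup) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Set<TableName> tables = new HashSet<>(Arrays.asList(
            TableName.valueOf("GrouptestMultiTableMoveA"),
            TableName.valueOf("GrouptestMultiTableMoveB")));
        // Triggers the REOPEN/MOVE procedures seen above and returns once the regions
        // have been reassigned to servers of the target group.
        rsGroupAdmin.moveTables(tables, targetGroup);
        // After the move, the group reported for either table should be targetGroup.
        RSGroupInfo info =
            rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
        System.out.println("Group of GrouptestMultiTableMoveA: " + info.getName());
      }
    }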
2023-07-14 17:13:48,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:48,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:48,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:48,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:48,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-14 17:13:48,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:48,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:48,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:48,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_225331531 2023-07-14 17:13:48,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:48,663 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-14 17:13:48,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveA 2023-07-14 17:13:48,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 17:13:48,671 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354828671"}]},"ts":"1689354828671"} 2023-07-14 17:13:48,672 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-14 17:13:48,674 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-14 17:13:48,677 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, UNASSIGN}] 2023-07-14 17:13:48,680 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, UNASSIGN 2023-07-14 17:13:48,681 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:48,681 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354828681"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354828681"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354828681"}]},"ts":"1689354828681"} 2023-07-14 17:13:48,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:48,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 17:13:48,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f519d4be975ba1421a3d8bd73b005433, disabling compactions & flushes 2023-07-14 17:13:48,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:48,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:48,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. after waiting 0 ms 2023-07-14 17:13:48,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 
2023-07-14 17:13:48,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:48,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433. 2023-07-14 17:13:48,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f519d4be975ba1421a3d8bd73b005433: 2023-07-14 17:13:48,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,843 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=f519d4be975ba1421a3d8bd73b005433, regionState=CLOSED 2023-07-14 17:13:48,843 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354828843"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354828843"}]},"ts":"1689354828843"} 2023-07-14 17:13:48,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-14 17:13:48,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure f519d4be975ba1421a3d8bd73b005433, server=jenkins-hbase20.apache.org,38517,1689354813230 in 163 msec 2023-07-14 17:13:48,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-14 17:13:48,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f519d4be975ba1421a3d8bd73b005433, UNASSIGN in 172 msec 2023-07-14 17:13:48,848 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354828848"}]},"ts":"1689354828848"} 2023-07-14 17:13:48,849 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-14 17:13:48,850 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-14 17:13:48,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 188 msec 2023-07-14 17:13:48,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 17:13:48,974 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-14 17:13:48,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveA 2023-07-14 17:13:48,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; 
DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,978 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_225331531' 2023-07-14 17:13:48,979 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:48,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:48,986 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:48,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:48,988 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits] 2023-07-14 17:13:48,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-14 17:13:48,993 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433/recovered.edits/7.seqid 2023-07-14 17:13:48,994 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveA/f519d4be975ba1421a3d8bd73b005433 2023-07-14 17:13:48,994 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-14 17:13:48,996 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:48,999 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from 
hbase:meta 2023-07-14 17:13:49,003 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-14 17:13:49,005 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:49,005 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-14 17:13:49,005 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354829005"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:49,007 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 17:13:49,007 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f519d4be975ba1421a3d8bd73b005433, NAME => 'GrouptestMultiTableMoveA,,1689354825777.f519d4be975ba1421a3d8bd73b005433.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 17:13:49,007 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-14 17:13:49,007 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354829007"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:49,016 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-14 17:13:49,019 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 17:13:49,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 44 msec 2023-07-14 17:13:49,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-14 17:13:49,094 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-14 17:13:49,094 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-14 17:13:49,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveB 2023-07-14 17:13:49,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 17:13:49,101 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354829101"}]},"ts":"1689354829101"} 2023-07-14 17:13:49,105 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-14 17:13:49,108 INFO [PEWorker-4] 
procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-14 17:13:49,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, UNASSIGN}] 2023-07-14 17:13:49,113 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, UNASSIGN 2023-07-14 17:13:49,113 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:49,113 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354829113"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354829113"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354829113"}]},"ts":"1689354829113"} 2023-07-14 17:13:49,115 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:49,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 17:13:49,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:49,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ee258ff2fccf29052a852a46e5a879df, disabling compactions & flushes 2023-07-14 17:13:49,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:49,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:49,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. after waiting 0 ms 2023-07-14 17:13:49,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 2023-07-14 17:13:49,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:49,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df. 
2023-07-14 17:13:49,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ee258ff2fccf29052a852a46e5a879df: 2023-07-14 17:13:49,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:49,277 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=ee258ff2fccf29052a852a46e5a879df, regionState=CLOSED 2023-07-14 17:13:49,277 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689354829276"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354829276"}]},"ts":"1689354829276"} 2023-07-14 17:13:49,280 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-14 17:13:49,280 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure ee258ff2fccf29052a852a46e5a879df, server=jenkins-hbase20.apache.org,38517,1689354813230 in 163 msec 2023-07-14 17:13:49,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-14 17:13:49,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ee258ff2fccf29052a852a46e5a879df, UNASSIGN in 169 msec 2023-07-14 17:13:49,281 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354829281"}]},"ts":"1689354829281"} 2023-07-14 17:13:49,283 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-14 17:13:49,284 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-14 17:13:49,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 190 msec 2023-07-14 17:13:49,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 17:13:49,403 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-14 17:13:49,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveB 2023-07-14 17:13:49,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,406 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_225331531' 2023-07-14 17:13:49,406 DEBUG 
[PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:49,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,410 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:49,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-14 17:13:49,411 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits] 2023-07-14 17:13:49,415 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits/7.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df/recovered.edits/7.seqid 2023-07-14 17:13:49,416 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/GrouptestMultiTableMoveB/ee258ff2fccf29052a852a46e5a879df 2023-07-14 17:13:49,416 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-14 17:13:49,418 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,420 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-14 17:13:49,421 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-14 17:13:49,423 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,423 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-14 17:13:49,423 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354829423"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:49,424 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 17:13:49,424 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ee258ff2fccf29052a852a46e5a879df, NAME => 'GrouptestMultiTableMoveB,,1689354826409.ee258ff2fccf29052a852a46e5a879df.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 17:13:49,424 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-14 17:13:49,424 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354829424"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:49,426 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-14 17:13:49,427 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 17:13:49,428 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 24 msec 2023-07-14 17:13:49,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-14 17:13:49,512 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-14 17:13:49,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:49,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517] to rsgroup default 2023-07-14 17:13:49,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_225331531 2023-07-14 17:13:49,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_225331531, current retry=0 2023-07-14 17:13:49,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230] are moved back to Group_testMultiTableMove_225331531 2023-07-14 17:13:49,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_225331531 => default 2023-07-14 17:13:49,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testMultiTableMove_225331531 2023-07-14 17:13:49,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:49,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:49,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:49,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:49,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:49,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,551 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:49,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:49,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:49,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:49,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356029568, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:49,569 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:49,571 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:49,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,573 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:49,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,598 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=516 (was 520), OpenFileDescriptor=811 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=525 (was 544), ProcessCount=173 (was 173), AvailableMemoryMB=4019 (was 3676) - AvailableMemoryMB LEAK? 
- 2023-07-14 17:13:49,599 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-14 17:13:49,628 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=516, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=525, ProcessCount=173, AvailableMemoryMB=4015 2023-07-14 17:13:49,629 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-14 17:13:49,629 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-14 17:13:49,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:13:49,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:49,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:49,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:49,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,662 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:49,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:49,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:49,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,683 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:49,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356029683, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:49,684 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:13:49,686 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:49,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,688 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:49,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,692 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,693 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldGroup 2023-07-14 17:13:49,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,704 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup oldGroup 2023-07-14 17:13:49,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:49,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to default 2023-07-14 17:13:49,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-14 17:13:49,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-14 17:13:49,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-14 17:13:49,747 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,748 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,748 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,749 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup anotherRSGroup 2023-07-14 17:13:49,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 17:13:49,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:49,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44093] to rsgroup anotherRSGroup 2023-07-14 17:13:49,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 17:13:49,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:49,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:49,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44093,1689354809062] are moved back to default 2023-07-14 17:13:49,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-14 17:13:49,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 
17:13:49,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-14 17:13:49,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-14 17:13:49,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-14 17:13:49,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 148.251.75.209:33882 deadline: 1689356029788, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-14 17:13:49,790 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to anotherRSGroup 2023-07-14 17:13:49,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 148.251.75.209:33882 deadline: 1689356029790, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-14 17:13:49,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from default to newRSGroup2 2023-07-14 17:13:49,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 148.251.75.209:33882 deadline: 1689356029792, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-14 17:13:49,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to default 2023-07-14 17:13:49,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 148.251.75.209:33882 deadline: 1689356029793, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-14 17:13:49,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
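Note on the rename rejections above: the four failures (nonexistent source group, target group already present, the default group itself, and renaming onto "default") all come from precondition checks in RSGroupInfoManagerImpl.renameRSGroup. Judging only from the messages and line numbers in these traces (403, 407, 410), the checks appear to run in this order: refuse to rename the default group, require that the source group exists, then refuse a target name that is already taken. The sketch below reconstructs that ordering for illustration only; the class name, the rsGroupMap parameter, and the method shape are assumptions, not the actual branch-2.4 source.

import java.util.Map;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Illustrative reconstruction of the validation order implied by the traces above
// (RSGroupInfoManagerImpl lines ~403/407/410); not the real HBase source.
final class RenameRSGroupPreconditionsSketch {
  static void check(Map<String, RSGroupInfo> rsGroupMap, String oldName, String newName)
      throws ConstraintException {
    if (RSGroupInfo.DEFAULT_GROUP.equals(oldName)) {
      throw new ConstraintException("Can't rename default rsgroup");            // ~line 403
    }
    if (!rsGroupMap.containsKey(oldName)) {
      throw new ConstraintException("RSGroup " + oldName + " does not exist");  // ~line 407
    }
    if (rsGroupMap.containsKey(newName)) {
      throw new ConstraintException("Group already exists: " + newName);        // ~line 410
    }
    // A rename that passes these checks re-registers the group under the new name and
    // persists it, which is what the "Updating znode" / "Writing ZK GroupInfo count"
    // entries elsewhere in this log record.
  }
}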
2023-07-14 17:13:49,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44093] to rsgroup default 2023-07-14 17:13:49,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 17:13:49,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:49,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-14 17:13:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44093,1689354809062] are moved back to anotherRSGroup 2023-07-14 17:13:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-14 17:13:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup anotherRSGroup 2023-07-14 17:13:49,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 17:13:49,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): 
moveTables() passed an empty set. Ignoring. 2023-07-14 17:13:49,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:49,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 17:13:49,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-14 17:13:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to oldGroup 2023-07-14 17:13:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-14 17:13:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup oldGroup 2023-07-14 17:13:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:49,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
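The sequence just above (move tables [] to default, move the group's servers back to default, then remove the group, repeated for anotherRSGroup and oldGroup) is the per-group cleanup TestRSGroupsBase performs between test methods. A minimal sketch of that pattern follows; the loop and helper name are illustrative assumptions, while the operations mirror the MoveTables / MoveServers / RemoveRSGroup / ListRSGroupInfos requests logged above, and the RSGroupAdmin method signatures are assumed from the branch-2.4 rsgroup client exercised by this test.

import java.io.IOException;

import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Sketch of the cleanup reflected in the log: every group other than "default" has its
// tables and servers moved back to the default group and is then removed.
final class RSGroupCleanupSketch {
  static void moveEverythingBackToDefault(RSGroupAdmin admin) throws IOException {
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // the built-in default group is never removed
      }
      // Empty sets are tolerated, cf. "moveTables() passed an empty set. Ignoring." above.
      admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      // The server move also re-assigns regions, cf. the "Moving 0 region(s) to group ..."
      // and "All regions from [...] are moved back to ..." entries above.
      admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      admin.removeRSGroup(group.getName());
    }
  }
}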
2023-07-14 17:13:49,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:49,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:49,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:49,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,871 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:49,872 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:49,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:49,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356029890, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:49,890 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:49,892 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:49,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,894 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:49,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,913 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=518 (was 516) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=525 (was 525), ProcessCount=173 (was 173), AvailableMemoryMB=4031 (was 4015) - AvailableMemoryMB LEAK? - 2023-07-14 17:13:49,914 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-14 17:13:49,931 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=518, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=525, ProcessCount=173, AvailableMemoryMB=4028 2023-07-14 17:13:49,932 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-14 17:13:49,932 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-14 17:13:49,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,938 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:49,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
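The "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish [...]" entries above come from a polling wait in the test harness: the listener thread repeatedly lists the rsgroups until only the built-in "default" group and the test's "master" group remain. A hedged sketch of that wait using the generic Waiter predicate API is below; the utility and admin handles and the two-group termination condition are assumptions read off the log, not the literal TestRSGroupsBase code.

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

// Sketch of the cleanup wait seen in the log: poll for up to 60 seconds until only the
// "default" and "master" groups are left, reporting the current layout while waiting.
final class WaitForGroupCleanupSketch {
  static void await(HBaseTestingUtility util, RSGroupAdmin admin) throws Exception {
    util.waitFor(60_000, new Waiter.ExplainingPredicate<IOException>() {
      @Override
      public boolean evaluate() throws IOException {
        return admin.listRSGroups().size() <= 2; // only "default" and "master" remain
      }

      @Override
      public String explainFailure() throws IOException {
        return "Waiting for cleanup to finish " + admin.listRSGroups();
      }
    });
  }
}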
2023-07-14 17:13:49,938 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:49,939 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:49,939 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:49,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:49,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:49,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:49,957 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:49,958 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:49,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:49,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:49,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:49,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356029973, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:49,974 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:49,976 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:49,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,977 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:49,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:49,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:49,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldgroup 2023-07-14 17:13:49,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:49,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,984 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:49,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:49,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:49,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:49,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup oldgroup 2023-07-14 17:13:49,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:49,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:49,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:49,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:50,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:50,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to default 2023-07-14 17:13:50,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-14 17:13:50,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:50,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:50,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:50,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-14 17:13:50,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:50,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:50,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-14 17:13:50,019 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:50,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-14 17:13:50,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 17:13:50,021 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:50,022 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:50,022 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:50,023 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:50,031 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:50,033 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,034 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/testRename/30ac29c4df468c2d4c926ec109f650db empty. 
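Note on the stretch above: the WARN ("Got this on setup, FYI") comes from TestRSGroupsBase.tearDownAfterMethod calling RSGroupAdminClient.moveServers for an address the master does not know as an online region server (port 41281, the master RPC port visible in the handler thread names), which RSGroupAdminServer rejects with the logged ConstraintException; the test tolerates it and rebuilds its fixture by adding rsgroup oldgroup, moving two region servers into it, and then asking the master to create table testRename. A minimal sketch of those group calls, assuming the branch-2.4 RSGroupAdminClient API named in the stack trace (constructor and exact signatures may differ):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hypothetical client-side driver for the AddRSGroup / MoveServers requests above.
// RSGroupAdminClient.moveServers is the call named in the stack trace; the
// constructor and signatures are assumed from branch-2.4 and may differ.
public class OldGroupSetupSketch {
  static void setUpOldGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // "add rsgroup oldgroup"
    rsGroupAdmin.addRSGroup("oldgroup");

    // "move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361]
    //  to rsgroup oldgroup"
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 38517));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 42361));
    rsGroupAdmin.moveServers(servers, "oldgroup");
  }
}
```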
2023-07-14 17:13:50,034 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,034 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-14 17:13:50,070 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:50,072 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 30ac29c4df468c2d4c926ec109f650db, NAME => 'testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 30ac29c4df468c2d4c926ec109f650db, disabling compactions & flushes 2023-07-14 17:13:50,084 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. after waiting 0 ms 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,084 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,084 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:50,086 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:50,087 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354830087"}]},"ts":"1689354830087"} 2023-07-14 17:13:50,091 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
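The HMaster create record and the RegionOpenAndInit record above both print the full schema of testRename: REGION_REPLICATION => '1' and a single column family tr with VERSIONS => '1'. For reference, a rough equivalent of that descriptor built with the standard HBase 2.x client API (an illustration, not the test's actual code; the remaining attributes printed in the log are not set explicitly here):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Rebuild of the logged schema: one family 'tr' capped at a single version,
// region replication 1; other attributes are left to the builder defaults.
public class TestRenameDescriptorSketch {
  static TableDescriptor testRenameDescriptor() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
            .setMaxVersions(1)
            .build())
        .build();
  }
}
```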
2023-07-14 17:13:50,092 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:50,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354830093"}]},"ts":"1689354830093"} 2023-07-14 17:13:50,094 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-14 17:13:50,096 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:50,097 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:50,097 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:50,097 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:50,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, ASSIGN}] 2023-07-14 17:13:50,099 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, ASSIGN 2023-07-14 17:13:50,102 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:50,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 17:13:50,253 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:13:50,254 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:50,254 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830254"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354830254"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354830254"}]},"ts":"1689354830254"} 2023-07-14 17:13:50,256 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:50,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 17:13:50,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30ac29c4df468c2d4c926ec109f650db, NAME => 'testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:50,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:50,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,414 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,416 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:50,416 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:50,417 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30ac29c4df468c2d4c926ec109f650db columnFamilyName tr 2023-07-14 17:13:50,417 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(310): Store=30ac29c4df468c2d4c926ec109f650db/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:50,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:50,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 30ac29c4df468c2d4c926ec109f650db; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11454591840, jitterRate=0.06679199635982513}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:50,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:50,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db., pid=116, masterSystemTime=1689354830408 2023-07-14 17:13:50,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
2023-07-14 17:13:50,427 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:50,427 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830427"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354830427"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354830427"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354830427"}]},"ts":"1689354830427"} 2023-07-14 17:13:50,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-14 17:13:50,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062 in 172 msec 2023-07-14 17:13:50,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-14 17:13:50,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, ASSIGN in 333 msec 2023-07-14 17:13:50,440 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:50,440 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354830440"}]},"ts":"1689354830440"} 2023-07-14 17:13:50,442 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-14 17:13:50,450 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:50,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 434 msec 2023-07-14 17:13:50,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 17:13:50,626 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-14 17:13:50,626 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-14 17:13:50,626 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:50,630 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
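With pid=114 finished, the listener thread reports the CREATE operation complete and then blocks until the new table's region is assigned; that wait is what produces the "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" lines. In test code this is typically a call like the following, where the testing utility instance stands in for the suite's own (a sketch, not the exact test source):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

// Post-create wait corresponding to the "Waiting until all regions of table
// testRename get assigned" / "All regions for table testRename assigned" lines.
public class AwaitAssignmentSketch {
  static void awaitTestRenameAssigned(HBaseTestingUtility testUtil) throws Exception {
    // Blocks until hbase:meta and the AssignmentManager agree every region of the
    // table is open, or the 60 s timeout (the value logged above) elapses.
    testUtil.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60_000);
  }
}
```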
2023-07-14 17:13:50,631 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:50,631 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-14 17:13:50,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup oldgroup 2023-07-14 17:13:50,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:50,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:50,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:50,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:50,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-14 17:13:50,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 30ac29c4df468c2d4c926ec109f650db to RSGroup oldgroup 2023-07-14 17:13:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:13:50,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:50,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE 2023-07-14 17:13:50,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-14 17:13:50,646 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE 2023-07-14 17:13:50,647 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:50,647 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830647"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354830647"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354830647"}]},"ts":"1689354830647"} 2023-07-14 17:13:50,648 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:50,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 30ac29c4df468c2d4c926ec109f650db, disabling compactions & flushes 2023-07-14 17:13:50,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. after waiting 0 ms 2023-07-14 17:13:50,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:50,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:50,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
2023-07-14 17:13:50,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:50,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 30ac29c4df468c2d4c926ec109f650db move to jenkins-hbase20.apache.org,38517,1689354813230 record at close sequenceid=2 2023-07-14 17:13:50,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:50,809 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=CLOSED 2023-07-14 17:13:50,809 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830809"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354830809"}]},"ts":"1689354830809"} 2023-07-14 17:13:50,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-14 17:13:50,812 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062 in 162 msec 2023-07-14 17:13:50,812 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,38517,1689354813230; forceNewPlan=false, retain=false 2023-07-14 17:13:50,962 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 17:13:50,963 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:50,963 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354830963"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354830963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354830963"}]},"ts":"1689354830963"} 2023-07-14 17:13:50,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:51,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
2023-07-14 17:13:51,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30ac29c4df468c2d4c926ec109f650db, NAME => 'testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:51,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:51,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,125 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,127 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:51,127 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:51,128 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30ac29c4df468c2d4c926ec109f650db columnFamilyName tr 2023-07-14 17:13:51,129 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(310): Store=30ac29c4df468c2d4c926ec109f650db/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:51,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:51,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 30ac29c4df468c2d4c926ec109f650db; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11084175840, jitterRate=0.032294318079948425}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:51,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db., pid=119, masterSystemTime=1689354831118 2023-07-14 17:13:51,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:51,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:51,147 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:51,147 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354831146"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354831146"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354831146"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354831146"}]},"ts":"1689354831146"} 2023-07-14 17:13:51,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-14 17:13:51,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,38517,1689354813230 in 183 msec 2023-07-14 17:13:51,153 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE in 507 msec 2023-07-14 17:13:51,385 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-14 17:13:51,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-14 17:13:51,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
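pid=117 above is the REOPEN/MOVE transition triggered by the MoveTables request: the testRename region is closed on jenkins-hbase20.apache.org,44093 and reopened on jenkins-hbase20.apache.org,38517, a member of oldgroup, and ProcedureSyncWait holds the RPC until the procedure finishes before RSGroupAdminServer reports all regions moved. A hedged sketch of the client side of that request and the follow-up group check (same branch-2.4 RSGroupAdminClient assumption as before):

```java
import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Hypothetical equivalent of the MoveTables RPC and the GetRSGroupInfoOfTable
// checks logged above; signatures assumed from branch-2.4.
public class MoveTestRenameSketch {
  static void moveTestRenameToOldGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName table = TableName.valueOf("testRename");

    // "move tables [testRename] to rsgroup oldgroup" -- the master drives the
    // REOPEN/MOVE procedure (pid=117) and only responds once it completes.
    rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");

    // "initiates rsgroup info retrieval, table=testRename"
    RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
    if (!"oldgroup".equals(group.getName())) {
      throw new AssertionError("testRename should now be in oldgroup, got " + group.getName());
    }
  }
}
```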
2023-07-14 17:13:51,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:51,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:51,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:51,653 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:51,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-14 17:13:51,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:51,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-14 17:13:51,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:51,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-14 17:13:51,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:51,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:51,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:51,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup normal 2023-07-14 17:13:51,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:51,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:51,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:51,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-14 17:13:51,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:51,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:51,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:51,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:51,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44093] to rsgroup normal 2023-07-14 17:13:51,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:51,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:51,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:51,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:51,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:51,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:51,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44093,1689354809062] are moved back to default 2023-07-14 17:13:51,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-14 17:13:51,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:51,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:51,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:51,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-14 17:13:51,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:51,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:51,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-14 17:13:51,713 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:51,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-14 17:13:51,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 17:13:51,716 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:51,716 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:51,717 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:51,723 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:51,724 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:51,726 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:51,728 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:51,728 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b empty. 
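The same pattern then repeats for a second group: "normal" is added, the server on port 44093 (which just gave up the testRename region) is moved into it, and the master starts a CreateTableProcedure for unmovedTable with a single 'ut' family. Sketch of the group half of that setup, plus a getRSGroupOfServer lookup as one way to confirm where a server landed (again assuming the branch-2.4 client API; the log does not show this lookup being made at this point):

```java
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Hypothetical setup of the "normal" group seen above; getRSGroupOfServer is an
// illustrative check, not something the log shows being called here.
public class NormalGroupSketch {
  static void setUpNormalGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Address rs = Address.fromParts("jenkins-hbase20.apache.org", 44093);

    rsGroupAdmin.addRSGroup("normal");                              // "add rsgroup normal"
    rsGroupAdmin.moveServers(Collections.singleton(rs), "normal");  // "move servers [...:44093] to rsgroup normal"

    RSGroupInfo group = rsGroupAdmin.getRSGroupOfServer(rs);
    assert "normal".equals(group.getName());
  }
}
```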
2023-07-14 17:13:51,729 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:51,729 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-14 17:13:51,748 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:51,749 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 64ab5cc83481d09558e7d84f19c0e88b, NAME => 'unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:51,767 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:51,768 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 64ab5cc83481d09558e7d84f19c0e88b, disabling compactions & flushes 2023-07-14 17:13:51,768 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:51,768 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:51,768 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. after waiting 0 ms 2023-07-14 17:13:51,768 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:51,768 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:51,768 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:51,770 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:51,771 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354831770"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354831770"}]},"ts":"1689354831770"} 2023-07-14 17:13:51,772 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:13:51,773 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:51,773 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354831773"}]},"ts":"1689354831773"} 2023-07-14 17:13:51,774 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-14 17:13:51,776 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, ASSIGN}] 2023-07-14 17:13:51,778 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, ASSIGN 2023-07-14 17:13:51,783 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:51,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 17:13:51,934 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:51,934 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354831934"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354831934"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354831934"}]},"ts":"1689354831934"} 2023-07-14 17:13:51,936 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:52,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-14 17:13:52,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64ab5cc83481d09558e7d84f19c0e88b, NAME => 'unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:52,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:52,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,099 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,103 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:52,103 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:52,103 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64ab5cc83481d09558e7d84f19c0e88b columnFamilyName ut 2023-07-14 17:13:52,105 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(310): Store=64ab5cc83481d09558e7d84f19c0e88b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:52,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:52,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 64ab5cc83481d09558e7d84f19c0e88b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10696099680, jitterRate=-0.0038480907678604126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:52,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:52,130 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b., pid=122, masterSystemTime=1689354832088 2023-07-14 17:13:52,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:52,132 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:52,132 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354832132"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354832132"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354832132"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354832132"}]},"ts":"1689354832132"} 2023-07-14 17:13:52,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-14 17:13:52,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303 in 198 msec 2023-07-14 17:13:52,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-14 17:13:52,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, ASSIGN in 361 msec 2023-07-14 17:13:52,141 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:52,141 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354832141"}]},"ts":"1689354832141"} 2023-07-14 17:13:52,142 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-14 17:13:52,144 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:52,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 435 msec 2023-07-14 17:13:52,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 17:13:52,319 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-14 17:13:52,319 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-14 17:13:52,319 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:52,326 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
2023-07-14 17:13:52,327 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:52,327 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-14 17:13:52,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup normal 2023-07-14 17:13:52,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 17:13:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:52,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-14 17:13:52,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 64ab5cc83481d09558e7d84f19c0e88b to RSGroup normal 2023-07-14 17:13:52,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE 2023-07-14 17:13:52,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-14 17:13:52,345 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE 2023-07-14 17:13:52,346 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:52,346 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354832346"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354832346"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354832346"}]},"ts":"1689354832346"} 2023-07-14 17:13:52,347 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:52,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1604): Closing 64ab5cc83481d09558e7d84f19c0e88b, disabling compactions & flushes 2023-07-14 17:13:52,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. after waiting 0 ms 2023-07-14 17:13:52,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:52,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:52,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 64ab5cc83481d09558e7d84f19c0e88b move to jenkins-hbase20.apache.org,44093,1689354809062 record at close sequenceid=2 2023-07-14 17:13:52,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,510 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=CLOSED 2023-07-14 17:13:52,511 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354832510"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354832510"}]},"ts":"1689354832510"} 2023-07-14 17:13:52,513 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-14 17:13:52,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303 in 165 msec 2023-07-14 17:13:52,514 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:52,665 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:52,665 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354832665"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354832665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354832665"}]},"ts":"1689354832665"} 2023-07-14 17:13:52,667 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:52,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64ab5cc83481d09558e7d84f19c0e88b, NAME => 'unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:52,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:52,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,826 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,827 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:52,827 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:52,828 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64ab5cc83481d09558e7d84f19c0e88b columnFamilyName ut 2023-07-14 17:13:52,829 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(310): Store=64ab5cc83481d09558e7d84f19c0e88b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:52,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:52,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 64ab5cc83481d09558e7d84f19c0e88b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9459507040, jitterRate=-0.11901475489139557}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:52,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:52,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b., pid=125, masterSystemTime=1689354832818 2023-07-14 17:13:52,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:52,840 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:52,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:52,841 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354832840"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354832840"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354832840"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354832840"}]},"ts":"1689354832840"} 2023-07-14 17:13:52,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-14 17:13:52,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,44093,1689354809062 in 175 msec 2023-07-14 17:13:52,844 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE in 499 msec 2023-07-14 17:13:53,324 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 17:13:53,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-14 17:13:53,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
2023-07-14 17:13:53,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:53,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:53,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:53,352 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:53,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 17:13:53,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:53,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-14 17:13:53,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:53,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 17:13:53,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:53,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldgroup to newgroup 2023-07-14 17:13:53,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:53,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:53,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:53,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:53,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-14 17:13:53,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RenameRSGroup 2023-07-14 17:13:53,367 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:53,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:53,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=newgroup 2023-07-14 17:13:53,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:53,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-14 17:13:53,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:53,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 17:13:53,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:53,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:53,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:53,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup default 2023-07-14 17:13:53,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:53,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:53,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:53,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:53,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:53,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-14 17:13:53,400 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 64ab5cc83481d09558e7d84f19c0e88b to RSGroup default 2023-07-14 17:13:53,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE 2023-07-14 17:13:53,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 17:13:53,401 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE 2023-07-14 17:13:53,402 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:53,402 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354833402"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354833402"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354833402"}]},"ts":"1689354833402"} 2023-07-14 17:13:53,404 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:53,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 64ab5cc83481d09558e7d84f19c0e88b, disabling compactions & flushes 2023-07-14 17:13:53,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:53,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:53,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. after waiting 0 ms 2023-07-14 17:13:53,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:53,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:53,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:53,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:53,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 64ab5cc83481d09558e7d84f19c0e88b move to jenkins-hbase20.apache.org,46457,1689354809303 record at close sequenceid=5 2023-07-14 17:13:53,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,566 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=CLOSED 2023-07-14 17:13:53,567 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354833566"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354833566"}]},"ts":"1689354833566"} 2023-07-14 17:13:53,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-14 17:13:53,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,44093,1689354809062 in 164 msec 2023-07-14 17:13:53,570 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:53,720 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:53,720 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354833720"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354833720"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354833720"}]},"ts":"1689354833720"} 2023-07-14 17:13:53,722 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:53,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:53,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 64ab5cc83481d09558e7d84f19c0e88b, NAME => 'unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:53,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:53,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,879 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,880 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:53,880 DEBUG [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/ut 2023-07-14 17:13:53,880 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 64ab5cc83481d09558e7d84f19c0e88b columnFamilyName ut 2023-07-14 17:13:53,881 INFO [StoreOpener-64ab5cc83481d09558e7d84f19c0e88b-1] regionserver.HStore(310): Store=64ab5cc83481d09558e7d84f19c0e88b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:53,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:53,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 64ab5cc83481d09558e7d84f19c0e88b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9755851200, jitterRate=-0.09141555428504944}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:53,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:53,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b., pid=128, masterSystemTime=1689354833873 2023-07-14 17:13:53,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:53,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:53,889 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=64ab5cc83481d09558e7d84f19c0e88b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:53,889 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689354833888"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354833888"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354833888"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354833888"}]},"ts":"1689354833888"} 2023-07-14 17:13:53,891 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-14 17:13:53,891 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 64ab5cc83481d09558e7d84f19c0e88b, server=jenkins-hbase20.apache.org,46457,1689354809303 in 168 msec 2023-07-14 17:13:53,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=64ab5cc83481d09558e7d84f19c0e88b, REOPEN/MOVE in 491 msec 2023-07-14 17:13:54,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-14 17:13:54,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-14 17:13:54,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:54,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44093] to rsgroup default 2023-07-14 17:13:54,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 17:13:54,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:54,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:54,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:54,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:13:54,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-14 17:13:54,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44093,1689354809062] are moved back to normal 2023-07-14 17:13:54,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-14 17:13:54,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:54,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup normal 2023-07-14 17:13:54,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:54,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:54,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:54,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 17:13:54,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:54,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:54,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:54,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:54,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:54,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:54,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:54,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:54,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:54,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup default 2023-07-14 17:13:54,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:54,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:54,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:54,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-14 17:13:54,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(345): Moving region 30ac29c4df468c2d4c926ec109f650db to RSGroup default 2023-07-14 17:13:54,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE 2023-07-14 17:13:54,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 17:13:54,442 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE 2023-07-14 17:13:54,443 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, 
regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:54,443 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354834443"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354834443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354834443"}]},"ts":"1689354834443"} 2023-07-14 17:13:54,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,38517,1689354813230}] 2023-07-14 17:13:54,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 30ac29c4df468c2d4c926ec109f650db, disabling compactions & flushes 2023-07-14 17:13:54,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:54,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:54,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. after waiting 0 ms 2023-07-14 17:13:54,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:54,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 17:13:54,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
2023-07-14 17:13:54,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:54,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 30ac29c4df468c2d4c926ec109f650db move to jenkins-hbase20.apache.org,44093,1689354809062 record at close sequenceid=5 2023-07-14 17:13:54,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,606 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=CLOSED 2023-07-14 17:13:54,606 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354834606"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354834606"}]},"ts":"1689354834606"} 2023-07-14 17:13:54,609 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-14 17:13:54,609 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,38517,1689354813230 in 164 msec 2023-07-14 17:13:54,609 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:54,760 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 17:13:54,760 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:54,760 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354834760"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354834760"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354834760"}]},"ts":"1689354834760"} 2023-07-14 17:13:54,762 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:54,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
2023-07-14 17:13:54,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30ac29c4df468c2d4c926ec109f650db, NAME => 'testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:13:54,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:54,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,919 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,920 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:54,920 DEBUG [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/tr 2023-07-14 17:13:54,921 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30ac29c4df468c2d4c926ec109f650db columnFamilyName tr 2023-07-14 17:13:54,921 INFO [StoreOpener-30ac29c4df468c2d4c926ec109f650db-1] regionserver.HStore(310): Store=30ac29c4df468c2d4c926ec109f650db/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:54,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:54,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 30ac29c4df468c2d4c926ec109f650db; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9717817600, jitterRate=-0.09495770931243896}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:54,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:54,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db., pid=131, masterSystemTime=1689354834913 2023-07-14 17:13:54,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:54,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:54,934 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=30ac29c4df468c2d4c926ec109f650db, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:54,935 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689354834934"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354834934"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354834934"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354834934"}]},"ts":"1689354834934"} 2023-07-14 17:13:54,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-14 17:13:54,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 30ac29c4df468c2d4c926ec109f650db, server=jenkins-hbase20.apache.org,44093,1689354809062 in 174 msec 2023-07-14 17:13:54,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30ac29c4df468c2d4c926ec109f650db, REOPEN/MOVE in 495 msec 2023-07-14 17:13:55,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-14 17:13:55,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
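
[editor's note] For orientation, the MoveTables request and the REOPEN/MOVE procedure above are the teardown of testRenameRSGroup sending the testRename table back to the default rsgroup: the master closes region 30ac29c4df468c2d4c926ec109f650db on the old group's server (38517) and reopens it on a default-group server (44093). A minimal sketch of driving the same call from a client, assuming the private RSGroupAdminClient helper in the hbase-rsgroup module that these tests use; connection setup and class name here are illustrative, not taken from the log:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableBackToDefault {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          // Moving a table between rsgroups triggers one REOPEN/MOVE
          // TransitRegionStateProcedure per region, as logged above (pid=129).
          groupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("testRename")), "default");
        }
      }
    }
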
2023-07-14 17:13:55,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:55,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:55,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 17:13:55,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:55,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-14 17:13:55,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to newgroup 2023-07-14 17:13:55,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-14 17:13:55,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:55,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup newgroup 2023-07-14 17:13:55,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:55,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:55,461 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:55,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:55,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:55,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:55,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:55,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356035470, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:55,471 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:13:55,472 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:55,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,473 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:55,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:55,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,494 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=514 (was 518), OpenFileDescriptor=793 (was 809), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=547 (was 525) - SystemLoadAverage LEAK? -, ProcessCount=173 (was 173), AvailableMemoryMB=3697 (was 4028) 2023-07-14 17:13:55,494 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-14 17:13:55,510 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=514, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=547, ProcessCount=173, AvailableMemoryMB=3696 2023-07-14 17:13:55,511 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-14 17:13:55,511 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-14 17:13:55,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:55,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
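
[editor's note] The "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish" lines are the base class polling the master between test methods until the rsgroup layout matches what it expects. A rough equivalent of that polling loop, assuming the Waiter test utility (org.apache.hadoop.hbase.Waiter) and an already constructed groupAdmin client; the specific predicate body is an illustrative stand-in for the test's own check:

    // Poll for up to 60 s before the next test method starts.
    // Assumes: conf (Configuration) and groupAdmin (RSGroupAdminClient) set up as above.
    Waiter.waitFor(conf, 60000, new Waiter.Predicate<java.io.IOException>() {
      @Override
      public boolean evaluate() throws java.io.IOException {
        // e.g. only the "default" and "master" groups should remain after teardown
        return groupAdmin.listRSGroups().size() == 2;
      }
    });
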
2023-07-14 17:13:55,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:55,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:55,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:55,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:55,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:55,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:55,525 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:55,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:55,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:55,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:55,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:55,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356035540, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:55,541 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:55,543 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:55,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,544 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:55,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=nonexistent 2023-07-14 17:13:55,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:55,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, server=bogus:123 2023-07-14 17:13:55,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-14 17:13:55,554 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bogus 2023-07-14 17:13:55,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bogus 2023-07-14 17:13:55,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 148.251.75.209:33882 deadline: 1689356035554, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-14 17:13:55,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [bogus:123] to rsgroup bogus 2023-07-14 17:13:55,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] 
ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 96 connection: 148.251.75.209:33882 deadline: 1689356035557, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-14 17:13:55,559 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-14 17:13:55,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=true 2023-07-14 17:13:55,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//148.251.75.209 balance rsgroup, group=bogus 2023-07-14 17:13:55,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 148.251.75.209:33882 deadline: 1689356035563, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-14 17:13:55,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:55,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
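
[editor's note] The testBogusArgs exchanges above (getRSGroupInfo for group "bogus", getRSGroupInfoOfServer for bogus:123, then removeRSGroup, moveServers and balanceRSGroup against "bogus") are all rejected by the master with a ConstraintException, which reaches the client as an IOException. A compact sketch of the client-side expectation, again assuming the RSGroupAdminClient helper; the assertion style is illustrative rather than copied from the test source (needs java.util.Collections, java.io.IOException, org.apache.hadoop.hbase.net.Address, org.junit.Assert):

    // Nonexistent group and server names are expected to fail fast.
    try {
      groupAdmin.removeRSGroup("bogus");
      org.junit.Assert.fail("removing a nonexistent rsgroup should fail");
    } catch (IOException expected) {
      // master logs: "RSGroup bogus does not exist"
    }
    try {
      groupAdmin.moveServers(
          Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      org.junit.Assert.fail("moving servers to a nonexistent rsgroup should fail");
    } catch (IOException expected) {
      // master logs: "RSGroup does not exist: bogus"
    }
    try {
      groupAdmin.balanceRSGroup("bogus");
      org.junit.Assert.fail("balancing a nonexistent rsgroup should fail");
    } catch (IOException expected) {
      // master logs: "RSGroup does not exist: bogus"
    }
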
2023-07-14 17:13:55,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:55,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:55,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:55,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:55,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:55,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:55,576 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:55,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:55,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:55,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:55,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:55,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356035586, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:55,589 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:55,591 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:55,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,592 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:55,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:55,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,611 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=518 (was 514) Potentially hanging thread: hconnection-0x523b891-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x523b891-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=793 (was 793), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=547 (was 547), ProcessCount=173 (was 173), AvailableMemoryMB=3694 (was 3696) 2023-07-14 17:13:55,611 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-14 17:13:55,634 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=518, OpenFileDescriptor=793, MaxFileDescriptor=60000, SystemLoadAverage=547, ProcessCount=173, AvailableMemoryMB=3695 2023-07-14 17:13:55,634 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-14 17:13:55,634 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-14 17:13:55,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:55,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:55,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:55,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:55,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:55,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:55,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:55,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:55,651 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:55,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:55,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:55,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:55,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:55,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:55,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356035674, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:55,674 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:55,675 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:55,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,676 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:55,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:55,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:55,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-14 17:13:55,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:55,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:55,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:55,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 17:13:55,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to default 2023-07-14 17:13:55,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:55,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:55,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:55,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, 
group=Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:55,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:13:55,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:55,700 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:13:55,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-14 17:13:55,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 17:13:55,702 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:55,703 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:55,703 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:55,703 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:55,717 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda empty. 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b empty. 2023-07-14 17:13:55,722 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 empty. 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd empty. 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 empty. 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:55,723 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:55,723 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-14 17:13:55,744 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-14 17:13:55,745 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 80279067a97a76ed485ce38010dc3f4b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.', STARTKEY 
=> 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:55,746 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f0ec6616537bd79b352c816f43e815cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:55,746 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6463ada38c31630b64f8994abde70978, NAME => 'Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:55,766 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:55,766 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 6463ada38c31630b64f8994abde70978, disabling compactions & flushes 2023-07-14 17:13:55,766 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:55,766 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:55,766 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. after waiting 0 ms 2023-07-14 17:13:55,766 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 
2023-07-14 17:13:55,766 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:55,767 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 6463ada38c31630b64f8994abde70978: 2023-07-14 17:13:55,767 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3abfc8527853197c98b1151898b5cdda, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f0ec6616537bd79b352c816f43e815cd, disabling compactions & flushes 2023-07-14 17:13:55,768 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. after waiting 0 ms 2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:55,768 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 
2023-07-14 17:13:55,768 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f0ec6616537bd79b352c816f43e815cd: 2023-07-14 17:13:55,769 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f9715b8d83fbe95a473ad5af6fc0e329, NAME => 'Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp 2023-07-14 17:13:55,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:55,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:55,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 3abfc8527853197c98b1151898b5cdda, disabling compactions & flushes 2023-07-14 17:13:55,779 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f9715b8d83fbe95a473ad5af6fc0e329, disabling compactions & flushes 2023-07-14 17:13:55,779 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:55,780 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. after waiting 0 ms 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 
after waiting 0 ms 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:55,780 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:55,780 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 3abfc8527853197c98b1151898b5cdda: 2023-07-14 17:13:55,780 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f9715b8d83fbe95a473ad5af6fc0e329: 2023-07-14 17:13:55,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 17:13:55,947 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-14 17:13:56,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 80279067a97a76ed485ce38010dc3f4b, disabling compactions & flushes 2023-07-14 17:13:56,169 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. after waiting 0 ms 2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:56,169 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 
2023-07-14 17:13:56,169 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 80279067a97a76ed485ce38010dc3f4b: 2023-07-14 17:13:56,172 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:13:56,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354836173"}]},"ts":"1689354836173"} 2023-07-14 17:13:56,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354836173"}]},"ts":"1689354836173"} 2023-07-14 17:13:56,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354836173"}]},"ts":"1689354836173"} 2023-07-14 17:13:56,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354836173"}]},"ts":"1689354836173"} 2023-07-14 17:13:56,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354836173"}]},"ts":"1689354836173"} 2023-07-14 17:13:56,176 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-14 17:13:56,177 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:13:56,177 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354836177"}]},"ts":"1689354836177"} 2023-07-14 17:13:56,178 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-14 17:13:56,180 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:13:56,181 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:13:56,181 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:13:56,181 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:13:56,181 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, ASSIGN}] 2023-07-14 17:13:56,184 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, ASSIGN 2023-07-14 17:13:56,184 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, ASSIGN 2023-07-14 17:13:56,185 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, ASSIGN 2023-07-14 17:13:56,185 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, ASSIGN 2023-07-14 17:13:56,187 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:56,187 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:56,187 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:56,187 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, ASSIGN 2023-07-14 17:13:56,187 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46457,1689354809303; forceNewPlan=false, retain=false 2023-07-14 17:13:56,188 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44093,1689354809062; forceNewPlan=false, retain=false 2023-07-14 17:13:56,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 17:13:56,337 INFO [jenkins-hbase20:41281] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 17:13:56,341 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=3abfc8527853197c98b1151898b5cdda, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,341 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=80279067a97a76ed485ce38010dc3f4b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,341 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=6463ada38c31630b64f8994abde70978, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,341 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f0ec6616537bd79b352c816f43e815cd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,341 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=f9715b8d83fbe95a473ad5af6fc0e329, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,342 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836341"}]},"ts":"1689354836341"} 2023-07-14 17:13:56,342 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836341"}]},"ts":"1689354836341"} 2023-07-14 17:13:56,342 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836341"}]},"ts":"1689354836341"} 2023-07-14 17:13:56,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836341"}]},"ts":"1689354836341"} 2023-07-14 17:13:56,342 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836341"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836341"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836341"}]},"ts":"1689354836341"} 2023-07-14 17:13:56,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure f9715b8d83fbe95a473ad5af6fc0e329, 
server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:56,344 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure f0ec6616537bd79b352c816f43e815cd, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:56,346 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=133, state=RUNNABLE; OpenRegionProcedure 6463ada38c31630b64f8994abde70978, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; OpenRegionProcedure 3abfc8527853197c98b1151898b5cdda, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,348 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=135, state=RUNNABLE; OpenRegionProcedure 80279067a97a76ed485ce38010dc3f4b, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:56,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f0ec6616537bd79b352c816f43e815cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 17:13:56,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,509 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 
2023-07-14 17:13:56,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80279067a97a76ed485ce38010dc3f4b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 17:13:56,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,511 INFO [StoreOpener-f0ec6616537bd79b352c816f43e815cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,515 INFO [StoreOpener-80279067a97a76ed485ce38010dc3f4b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,516 DEBUG [StoreOpener-f0ec6616537bd79b352c816f43e815cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/f 2023-07-14 17:13:56,516 DEBUG [StoreOpener-f0ec6616537bd79b352c816f43e815cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/f 2023-07-14 17:13:56,517 INFO [StoreOpener-f0ec6616537bd79b352c816f43e815cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f0ec6616537bd79b352c816f43e815cd columnFamilyName f 2023-07-14 17:13:56,519 DEBUG [StoreOpener-80279067a97a76ed485ce38010dc3f4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/f 2023-07-14 17:13:56,519 INFO [StoreOpener-f0ec6616537bd79b352c816f43e815cd-1] regionserver.HStore(310): Store=f0ec6616537bd79b352c816f43e815cd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:56,521 DEBUG [StoreOpener-80279067a97a76ed485ce38010dc3f4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/f 2023-07-14 17:13:56,522 INFO [StoreOpener-80279067a97a76ed485ce38010dc3f4b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80279067a97a76ed485ce38010dc3f4b columnFamilyName f 2023-07-14 17:13:56,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:56,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:56,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f0ec6616537bd79b352c816f43e815cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10564598720, jitterRate=-0.016095072031021118}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:56,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f0ec6616537bd79b352c816f43e815cd: 2023-07-14 17:13:56,532 INFO [StoreOpener-80279067a97a76ed485ce38010dc3f4b-1] regionserver.HStore(310): Store=80279067a97a76ed485ce38010dc3f4b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 
17:13:56,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,533 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd., pid=139, masterSystemTime=1689354836499 2023-07-14 17:13:56,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:56,536 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f0ec6616537bd79b352c816f43e815cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,536 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836536"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354836536"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354836536"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354836536"}]},"ts":"1689354836536"} 2023-07-14 17:13:56,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:56,537 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 
2023-07-14 17:13:56,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f9715b8d83fbe95a473ad5af6fc0e329, NAME => 'Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 17:13:56,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,542 INFO [StoreOpener-f9715b8d83fbe95a473ad5af6fc0e329-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:56,548 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-14 17:13:56,549 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure f0ec6616537bd79b352c816f43e815cd, server=jenkins-hbase20.apache.org,46457,1689354809303 in 199 msec 2023-07-14 17:13:56,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, ASSIGN in 368 msec 2023-07-14 17:13:56,551 DEBUG [StoreOpener-f9715b8d83fbe95a473ad5af6fc0e329-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/f 2023-07-14 17:13:56,551 DEBUG [StoreOpener-f9715b8d83fbe95a473ad5af6fc0e329-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/f 2023-07-14 17:13:56,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:56,552 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 80279067a97a76ed485ce38010dc3f4b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9730920640, jitterRate=-0.09373739361763}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:56,552 INFO [StoreOpener-f9715b8d83fbe95a473ad5af6fc0e329-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f9715b8d83fbe95a473ad5af6fc0e329 columnFamilyName f 2023-07-14 17:13:56,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 80279067a97a76ed485ce38010dc3f4b: 2023-07-14 17:13:56,553 INFO [StoreOpener-f9715b8d83fbe95a473ad5af6fc0e329-1] regionserver.HStore(310): Store=f9715b8d83fbe95a473ad5af6fc0e329/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:56,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b., pid=141, masterSystemTime=1689354836504 2023-07-14 17:13:56,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:56,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:56,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 
2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3abfc8527853197c98b1151898b5cdda, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,556 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=80279067a97a76ed485ce38010dc3f4b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,556 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836556"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354836556"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354836556"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354836556"}]},"ts":"1689354836556"} 2023-07-14 17:13:56,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,557 INFO [StoreOpener-3abfc8527853197c98b1151898b5cdda-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=135 2023-07-14 17:13:56,560 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=135, state=SUCCESS; OpenRegionProcedure 80279067a97a76ed485ce38010dc3f4b, server=jenkins-hbase20.apache.org,44093,1689354809062 in 211 msec 2023-07-14 17:13:56,560 DEBUG [StoreOpener-3abfc8527853197c98b1151898b5cdda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/f 2023-07-14 17:13:56,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 
17:13:56,561 DEBUG [StoreOpener-3abfc8527853197c98b1151898b5cdda-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/f 2023-07-14 17:13:56,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f9715b8d83fbe95a473ad5af6fc0e329; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11668630400, jitterRate=0.08672589063644409}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:56,562 INFO [StoreOpener-3abfc8527853197c98b1151898b5cdda-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3abfc8527853197c98b1151898b5cdda columnFamilyName f 2023-07-14 17:13:56,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f9715b8d83fbe95a473ad5af6fc0e329: 2023-07-14 17:13:56,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, ASSIGN in 379 msec 2023-07-14 17:13:56,562 INFO [StoreOpener-3abfc8527853197c98b1151898b5cdda-1] regionserver.HStore(310): Store=3abfc8527853197c98b1151898b5cdda/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:56,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329., pid=138, masterSystemTime=1689354836499 2023-07-14 17:13:56,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:56,566 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 
2023-07-14 17:13:56,569 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=f9715b8d83fbe95a473ad5af6fc0e329, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,569 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836569"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354836569"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354836569"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354836569"}]},"ts":"1689354836569"} 2023-07-14 17:13:56,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-14 17:13:56,574 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure f9715b8d83fbe95a473ad5af6fc0e329, server=jenkins-hbase20.apache.org,46457,1689354809303 in 229 msec 2023-07-14 17:13:56,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, ASSIGN in 392 msec 2023-07-14 17:13:56,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:56,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 3abfc8527853197c98b1151898b5cdda; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11595155680, jitterRate=0.07988302409648895}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:56,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 3abfc8527853197c98b1151898b5cdda: 2023-07-14 17:13:56,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda., pid=142, masterSystemTime=1689354836504 2023-07-14 17:13:56,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 
2023-07-14 17:13:56,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6463ada38c31630b64f8994abde70978, NAME => 'Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 17:13:56,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:13:56,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,589 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=3abfc8527853197c98b1151898b5cdda, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,589 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836589"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354836589"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354836589"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354836589"}]},"ts":"1689354836589"} 2023-07-14 17:13:56,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-14 17:13:56,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; OpenRegionProcedure 3abfc8527853197c98b1151898b5cdda, server=jenkins-hbase20.apache.org,44093,1689354809062 in 242 msec 2023-07-14 17:13:56,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, ASSIGN in 422 msec 2023-07-14 17:13:56,606 INFO [StoreOpener-6463ada38c31630b64f8994abde70978-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,608 DEBUG [StoreOpener-6463ada38c31630b64f8994abde70978-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/f 2023-07-14 17:13:56,608 DEBUG [StoreOpener-6463ada38c31630b64f8994abde70978-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/f 
2023-07-14 17:13:56,609 INFO [StoreOpener-6463ada38c31630b64f8994abde70978-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6463ada38c31630b64f8994abde70978 columnFamilyName f 2023-07-14 17:13:56,610 INFO [StoreOpener-6463ada38c31630b64f8994abde70978-1] regionserver.HStore(310): Store=6463ada38c31630b64f8994abde70978/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:13:56,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:13:56,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 6463ada38c31630b64f8994abde70978; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10398305760, jitterRate=-0.031582310795784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:13:56,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 6463ada38c31630b64f8994abde70978: 2023-07-14 17:13:56,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978., pid=140, masterSystemTime=1689354836504 2023-07-14 17:13:56,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:56,625 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 
2023-07-14 17:13:56,629 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=6463ada38c31630b64f8994abde70978, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,629 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836629"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354836629"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354836629"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354836629"}]},"ts":"1689354836629"} 2023-07-14 17:13:56,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=133 2023-07-14 17:13:56,644 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=133, state=SUCCESS; OpenRegionProcedure 6463ada38c31630b64f8994abde70978, server=jenkins-hbase20.apache.org,44093,1689354809062 in 289 msec 2023-07-14 17:13:56,646 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-14 17:13:56,647 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, ASSIGN in 463 msec 2023-07-14 17:13:56,651 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:13:56,651 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354836651"}]},"ts":"1689354836651"} 2023-07-14 17:13:56,654 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-14 17:13:56,656 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:13:56,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 958 msec 2023-07-14 17:13:56,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 17:13:56,806 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-14 17:13:56,807 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-14 17:13:56,807 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:56,810 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-14 17:13:56,811 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:56,811 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-14 17:13:56,811 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:56,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-14 17:13:56,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:56,819 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-14 17:13:56,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-14 17:13:56,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:56,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 17:13:56,823 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354836823"}]},"ts":"1689354836823"} 2023-07-14 17:13:56,824 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-14 17:13:56,825 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-14 17:13:56,826 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, UNASSIGN}] 2023-07-14 17:13:56,830 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, UNASSIGN 2023-07-14 17:13:56,831 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, UNASSIGN 2023-07-14 17:13:56,831 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, UNASSIGN 2023-07-14 17:13:56,831 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, UNASSIGN 2023-07-14 17:13:56,831 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, UNASSIGN 2023-07-14 17:13:56,831 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=3abfc8527853197c98b1151898b5cdda, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,832 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=6463ada38c31630b64f8994abde70978, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,832 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836831"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836831"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836831"}]},"ts":"1689354836831"} 2023-07-14 17:13:56,832 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836832"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836832"}]},"ts":"1689354836832"} 2023-07-14 17:13:56,832 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=f9715b8d83fbe95a473ad5af6fc0e329, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,832 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f0ec6616537bd79b352c816f43e815cd, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:56,832 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354836832"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836832"}]},"ts":"1689354836832"} 2023-07-14 17:13:56,832 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836832"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836832"}]},"ts":"1689354836832"} 2023-07-14 17:13:56,832 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=80279067a97a76ed485ce38010dc3f4b, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:56,833 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354836832"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354836832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354836832"}]},"ts":"1689354836832"} 2023-07-14 17:13:56,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 3abfc8527853197c98b1151898b5cdda, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 6463ada38c31630b64f8994abde70978, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=148, state=RUNNABLE; CloseRegionProcedure f9715b8d83fbe95a473ad5af6fc0e329, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:56,837 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=145, state=RUNNABLE; CloseRegionProcedure f0ec6616537bd79b352c816f43e815cd, server=jenkins-hbase20.apache.org,46457,1689354809303}] 2023-07-14 17:13:56,838 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=146, state=RUNNABLE; CloseRegionProcedure 80279067a97a76ed485ce38010dc3f4b, server=jenkins-hbase20.apache.org,44093,1689354809062}] 2023-07-14 17:13:56,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 17:13:56,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 3abfc8527853197c98b1151898b5cdda, disabling compactions & flushes 2023-07-14 17:13:56,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 
after waiting 0 ms 2023-07-14 17:13:56,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:56,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f9715b8d83fbe95a473ad5af6fc0e329, disabling compactions & flushes 2023-07-14 17:13:56,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:56,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:56,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. after waiting 0 ms 2023-07-14 17:13:56,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 2023-07-14 17:13:56,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:56,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda. 2023-07-14 17:13:56,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 3abfc8527853197c98b1151898b5cdda: 2023-07-14 17:13:56,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:56,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:56,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 6463ada38c31630b64f8994abde70978, disabling compactions & flushes 2023-07-14 17:13:56,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:56,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:57,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. after waiting 0 ms 2023-07-14 17:13:57,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 
2023-07-14 17:13:57,001 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=3abfc8527853197c98b1151898b5cdda, regionState=CLOSED 2023-07-14 17:13:57,002 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354837001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354837001"}]},"ts":"1689354837001"} 2023-07-14 17:13:57,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:57,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978. 2023-07-14 17:13:57,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 6463ada38c31630b64f8994abde70978: 2023-07-14 17:13:57,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:57,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 6463ada38c31630b64f8994abde70978 2023-07-14 17:13:57,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:57,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 80279067a97a76ed485ce38010dc3f4b, disabling compactions & flushes 2023-07-14 17:13:57,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:57,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:57,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. after waiting 0 ms 2023-07-14 17:13:57,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 2023-07-14 17:13:57,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329. 
2023-07-14 17:13:57,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f9715b8d83fbe95a473ad5af6fc0e329: 2023-07-14 17:13:57,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-14 17:13:57,010 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 3abfc8527853197c98b1151898b5cdda, server=jenkins-hbase20.apache.org,44093,1689354809062 in 171 msec 2023-07-14 17:13:57,011 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=6463ada38c31630b64f8994abde70978, regionState=CLOSED 2023-07-14 17:13:57,011 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354837011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354837011"}]},"ts":"1689354837011"} 2023-07-14 17:13:57,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3abfc8527853197c98b1151898b5cdda, UNASSIGN in 184 msec 2023-07-14 17:13:57,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:57,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:57,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f0ec6616537bd79b352c816f43e815cd, disabling compactions & flushes 2023-07-14 17:13:57,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:57,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:57,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. after waiting 0 ms 2023-07-14 17:13:57,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 
2023-07-14 17:13:57,016 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=f9715b8d83fbe95a473ad5af6fc0e329, regionState=CLOSED 2023-07-14 17:13:57,016 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689354837016"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354837016"}]},"ts":"1689354837016"} 2023-07-14 17:13:57,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-14 17:13:57,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 6463ada38c31630b64f8994abde70978, server=jenkins-hbase20.apache.org,44093,1689354809062 in 179 msec 2023-07-14 17:13:57,018 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6463ada38c31630b64f8994abde70978, UNASSIGN in 191 msec 2023-07-14 17:13:57,019 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=148 2023-07-14 17:13:57,019 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=148, state=SUCCESS; CloseRegionProcedure f9715b8d83fbe95a473ad5af6fc0e329, server=jenkins-hbase20.apache.org,46457,1689354809303 in 182 msec 2023-07-14 17:13:57,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f9715b8d83fbe95a473ad5af6fc0e329, UNASSIGN in 193 msec 2023-07-14 17:13:57,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:57,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b. 
2023-07-14 17:13:57,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 80279067a97a76ed485ce38010dc3f4b: 2023-07-14 17:13:57,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:57,026 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=80279067a97a76ed485ce38010dc3f4b, regionState=CLOSED 2023-07-14 17:13:57,027 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354837026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354837026"}]},"ts":"1689354837026"} 2023-07-14 17:13:57,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:13:57,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd. 2023-07-14 17:13:57,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f0ec6616537bd79b352c816f43e815cd: 2023-07-14 17:13:57,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:57,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=146 2023-07-14 17:13:57,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=146, state=SUCCESS; CloseRegionProcedure 80279067a97a76ed485ce38010dc3f4b, server=jenkins-hbase20.apache.org,44093,1689354809062 in 190 msec 2023-07-14 17:13:57,032 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f0ec6616537bd79b352c816f43e815cd, regionState=CLOSED 2023-07-14 17:13:57,032 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689354837032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354837032"}]},"ts":"1689354837032"} 2023-07-14 17:13:57,035 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=80279067a97a76ed485ce38010dc3f4b, UNASSIGN in 206 msec 2023-07-14 17:13:57,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=145 2023-07-14 17:13:57,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=145, state=SUCCESS; CloseRegionProcedure f0ec6616537bd79b352c816f43e815cd, server=jenkins-hbase20.apache.org,46457,1689354809303 in 196 msec 2023-07-14 17:13:57,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=143 2023-07-14 17:13:57,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=f0ec6616537bd79b352c816f43e815cd, UNASSIGN in 210 msec 2023-07-14 17:13:57,040 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354837040"}]},"ts":"1689354837040"} 2023-07-14 17:13:57,041 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-14 17:13:57,043 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-14 17:13:57,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 226 msec 2023-07-14 17:13:57,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 17:13:57,126 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-14 17:13:57,126 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:57,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-14 17:13:57,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_863602471, current retry=0 2023-07-14 17:13:57,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_863602471. 
2023-07-14 17:13:57,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:57,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-14 17:13:57,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:13:57,142 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-14 17:13:57,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-14 17:13:57,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:57,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 924 service: MasterService methodName: DisableTable size: 87 connection: 148.251.75.209:33882 deadline: 1689354897142, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-14 17:13:57,143 DEBUG [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-14 17:13:57,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testDisabledTableMove 2023-07-14 17:13:57,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,146 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_863602471' 2023-07-14 17:13:57,147 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:57,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-14 17:13:57,158 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:57,158 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:57,158 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:57,158 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:57,158 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:57,161 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/f, FileablePath, 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/recovered.edits] 2023-07-14 17:13:57,162 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/recovered.edits] 2023-07-14 17:13:57,163 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/recovered.edits] 2023-07-14 17:13:57,163 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/recovered.edits] 2023-07-14 17:13:57,163 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/f, FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/recovered.edits] 2023-07-14 17:13:57,175 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b/recovered.edits/4.seqid 2023-07-14 17:13:57,175 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329/recovered.edits/4.seqid 2023-07-14 17:13:57,177 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978/recovered.edits/4.seqid 2023-07-14 17:13:57,177 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f9715b8d83fbe95a473ad5af6fc0e329 2023-07-14 17:13:57,178 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/80279067a97a76ed485ce38010dc3f4b 2023-07-14 17:13:57,179 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/6463ada38c31630b64f8994abde70978 2023-07-14 17:13:57,179 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda/recovered.edits/4.seqid 2023-07-14 17:13:57,180 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/recovered.edits/4.seqid to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/archive/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd/recovered.edits/4.seqid 2023-07-14 17:13:57,181 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/3abfc8527853197c98b1151898b5cdda 2023-07-14 17:13:57,181 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/.tmp/data/default/Group_testDisabledTableMove/f0ec6616537bd79b352c816f43e815cd 2023-07-14 17:13:57,181 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-14 17:13:57,184 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,186 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-14 17:13:57,191 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-14 17:13:57,192 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354837192"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354837192"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354837192"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354837192"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,192 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354837192"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,194 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 17:13:57,194 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6463ada38c31630b64f8994abde70978, NAME => 'Group_testDisabledTableMove,,1689354835698.6463ada38c31630b64f8994abde70978.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f0ec6616537bd79b352c816f43e815cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689354835698.f0ec6616537bd79b352c816f43e815cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 80279067a97a76ed485ce38010dc3f4b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689354835698.80279067a97a76ed485ce38010dc3f4b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3abfc8527853197c98b1151898b5cdda, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689354835698.3abfc8527853197c98b1151898b5cdda.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f9715b8d83fbe95a473ad5af6fc0e329, NAME => 'Group_testDisabledTableMove,zzzzz,1689354835698.f9715b8d83fbe95a473ad5af6fc0e329.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 17:13:57,194 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-14 17:13:57,195 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354837194"}]},"ts":"9223372036854775807"} 2023-07-14 17:13:57,196 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-14 17:13:57,198 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 17:13:57,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 54 msec 2023-07-14 17:13:57,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-14 17:13:57,252 INFO [Listener at localhost.localdomain/41607] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-14 17:13:57,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:57,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:57,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:57,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361] to rsgroup default 2023-07-14 17:13:57,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:13:57,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_863602471, current retry=0 2023-07-14 17:13:57,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,38517,1689354813230, jenkins-hbase20.apache.org,42361,1689354809221] are moved back to Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_863602471 => default 2023-07-14 17:13:57,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:57,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testDisabledTableMove_863602471 2023-07-14 17:13:57,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:13:57,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:57,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:57,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:13:57,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:57,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:57,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:57,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:57,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:57,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:57,284 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:57,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:57,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:57,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:57,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:57,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:57,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 958 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356037299, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:57,300 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:13:57,303 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,304 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:57,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:57,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:57,329 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=521 (was 518) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280957736_17 at /127.0.0.1:37670 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x37307bc-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2d4e8d6d-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1469634444_17 at /127.0.0.1:54110 [Waiting for operation #28] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=825 (was 793) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 547), ProcessCount=172 (was 173), AvailableMemoryMB=3625 (was 3695) 2023-07-14 17:13:57,329 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-14 17:13:57,348 INFO [Listener at localhost.localdomain/41607] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=521, OpenFileDescriptor=825, MaxFileDescriptor=60000, SystemLoadAverage=543, ProcessCount=172, AvailableMemoryMB=3624 2023-07-14 17:13:57,348 WARN [Listener at localhost.localdomain/41607] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-14 17:13:57,348 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-14 17:13:57,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:13:57,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:13:57,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:13:57,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:13:57,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:13:57,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:13:57,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:13:57,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:13:57,362 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:13:57,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:13:57,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:13:57,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:13:57,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:13:57,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:13:57,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:41281] to rsgroup master 2023-07-14 17:13:57,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:13:57,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] ipc.CallRunner(144): callId: 986 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:33882 deadline: 1689356037370, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 2023-07-14 17:13:57,370 WARN [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:41281 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:13:57,372 INFO [Listener at localhost.localdomain/41607] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:13:57,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:13:57,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:13:57,373 INFO [Listener at localhost.localdomain/41607] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:38517, jenkins-hbase20.apache.org:42361, jenkins-hbase20.apache.org:44093, jenkins-hbase20.apache.org:46457], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:13:57,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:13:57,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41281] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:13:57,373 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-14 17:13:57,373 INFO [Listener at localhost.localdomain/41607] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 17:13:57,374 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x41299102 to 127.0.0.1:54612 2023-07-14 17:13:57,374 DEBUG [Listener at localhost.localdomain/41607] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,375 DEBUG [Listener at localhost.localdomain/41607] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 17:13:57,375 DEBUG [Listener at localhost.localdomain/41607] util.JVMClusterUtil(257): Found active master hash=1867990685, stopped=false 2023-07-14 17:13:57,375 DEBUG [Listener at localhost.localdomain/41607] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 17:13:57,375 DEBUG [Listener at localhost.localdomain/41607] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 17:13:57,375 INFO [Listener at localhost.localdomain/41607] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:57,391 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:57,392 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:57,392 INFO [Listener at localhost.localdomain/41607] procedure2.ProcedureExecutor(629): Stopping 
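The MoveServers failure logged above comes from the per-method setup/teardown in TestRSGroupsBase: it re-creates the "master" rsgroup and then tries to move the master's own address into it, and RSGroupAdminServer rejects that with a ConstraintException because the address is not an online region server (the test only logs it as "Got this on setup, FYI"). A minimal sketch of the same client-side calls, assuming only the RSGroupAdminClient API that the stack trace itself references (listRSGroups, addRSGroup, moveServers); the connection settings and host/port below are placeholders, not values from this run:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Same information as the ListRSGroupInfos requests in the log: group names and members.
          for (RSGroupInfo info : groups.listRSGroups()) {
            System.out.println(info.getName() + " servers=" + info.getServers());
          }
          groups.addRSGroup("master");
          // Equivalent of the failing MoveServers call: moving an address that is not an
          // online region server (for example the master's RPC endpoint) is rejected with
          // a ConstraintException, as seen above. The address here is a placeholder.
          Address notARegionServer = Address.fromParts("master-host.example.org", 41281);
          groups.moveServers(Collections.singleton(notARegionServer), "master");
        }
      }
    }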
2023-07-14 17:13:57,392 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:57,392 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:57,391 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:13:57,392 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:57,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:57,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:57,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:57,392 DEBUG [Listener at localhost.localdomain/41607] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6660e458 to 127.0.0.1:54612 2023-07-14 17:13:57,393 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:57,393 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:57,393 DEBUG [Listener at localhost.localdomain/41607] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44093,1689354809062' ***** 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,42361,1689354809221' ***** 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,46457,1689354809303' ***** 2023-07-14 17:13:57,393 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:13:57,394 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 
17:13:57,393 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:13:57,393 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:13:57,394 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38517,1689354813230' ***** 2023-07-14 17:13:57,394 INFO [Listener at localhost.localdomain/41607] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:13:57,394 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:13:57,406 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-14 17:13:57,406 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-14 17:13:57,413 INFO [RS:0;jenkins-hbase20:44093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@d602e46{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:57,413 INFO [RS:2;jenkins-hbase20:46457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66be370b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:57,413 INFO [RS:1;jenkins-hbase20:42361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5b72dc05{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:57,413 INFO [RS:3;jenkins-hbase20:38517] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3bc525f6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:13:57,418 INFO [RS:1;jenkins-hbase20:42361] server.AbstractConnector(383): Stopped ServerConnector@66040974{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,418 INFO [RS:0;jenkins-hbase20:44093] server.AbstractConnector(383): Stopped ServerConnector@688f1242{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,418 INFO [RS:3;jenkins-hbase20:38517] server.AbstractConnector(383): Stopped ServerConnector@4d21a747{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,418 INFO [RS:2;jenkins-hbase20:46457] server.AbstractConnector(383): Stopped ServerConnector@5699ce09{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,418 INFO [RS:3;jenkins-hbase20:38517] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:13:57,418 INFO [RS:0;jenkins-hbase20:44093] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:13:57,418 INFO [RS:1;jenkins-hbase20:42361] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:13:57,418 INFO [RS:2;jenkins-hbase20:46457] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:13:57,419 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,420 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:13:57,420 INFO [RS:3;jenkins-hbase20:38517] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4807f72c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:13:57,420 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,420 INFO [RS:2;jenkins-hbase20:46457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@73a2030a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:13:57,420 INFO [RS:0;jenkins-hbase20:44093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c0b26d3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:13:57,422 INFO [RS:2;jenkins-hbase20:46457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@22b17312{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,STOPPED} 2023-07-14 17:13:57,422 INFO [RS:0;jenkins-hbase20:44093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@57dd3d48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,STOPPED} 2023-07-14 17:13:57,422 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:13:57,422 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:13:57,422 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,421 INFO [RS:3;jenkins-hbase20:38517] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34e957d0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,STOPPED} 2023-07-14 17:13:57,421 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:13:57,420 INFO [RS:1;jenkins-hbase20:42361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7125c9f8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:13:57,424 INFO [RS:1;jenkins-hbase20:42361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@130df82f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,STOPPED} 2023-07-14 17:13:57,426 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,426 INFO [RS:3;jenkins-hbase20:38517] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:13:57,426 INFO [RS:2;jenkins-hbase20:46457] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:13:57,426 INFO [RS:0;jenkins-hbase20:44093] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:13:57,426 INFO [RS:2;jenkins-hbase20:46457] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager 
gracefully. 2023-07-14 17:13:57,426 INFO [RS:0;jenkins-hbase20:44093] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:13:57,427 INFO [RS:0;jenkins-hbase20:44093] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:13:57,426 INFO [RS:2;jenkins-hbase20:46457] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:13:57,427 INFO [RS:3;jenkins-hbase20:38517] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:13:57,427 INFO [RS:1;jenkins-hbase20:42361] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:13:57,427 INFO [RS:1;jenkins-hbase20:42361] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:13:57,427 INFO [RS:1;jenkins-hbase20:42361] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:13:57,427 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(3305): Received CLOSE for 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:57,427 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(3305): Received CLOSE for 64ab5cc83481d09558e7d84f19c0e88b 2023-07-14 17:13:57,427 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:57,427 INFO [RS:3;jenkins-hbase20:38517] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:13:57,427 DEBUG [RS:1;jenkins-hbase20:42361] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x52881952 to 127.0.0.1:54612 2023-07-14 17:13:57,427 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:57,427 DEBUG [RS:1;jenkins-hbase20:42361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,427 DEBUG [RS:3;jenkins-hbase20:38517] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6f382546 to 127.0.0.1:54612 2023-07-14 17:13:57,427 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,42361,1689354809221; all regions closed. 2023-07-14 17:13:57,427 DEBUG [RS:3;jenkins-hbase20:38517] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,428 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38517,1689354813230; all regions closed. 
2023-07-14 17:13:57,430 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:57,431 DEBUG [RS:0;jenkins-hbase20:44093] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00ccb61f to 127.0.0.1:54612 2023-07-14 17:13:57,431 DEBUG [RS:0;jenkins-hbase20:44093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,431 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 17:13:57,431 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1478): Online Regions={30ac29c4df468c2d4c926ec109f650db=testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db.} 2023-07-14 17:13:57,430 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(3305): Received CLOSE for 773f58cde6eff004015f5064f08a8726 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(3305): Received CLOSE for f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:57,431 DEBUG [RS:2;jenkins-hbase20:46457] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e1da330 to 127.0.0.1:54612 2023-07-14 17:13:57,431 DEBUG [RS:2;jenkins-hbase20:46457] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:13:57,431 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 17:13:57,431 DEBUG [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1504): Waiting on 30ac29c4df468c2d4c926ec109f650db 2023-07-14 17:13:57,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 64ab5cc83481d09558e7d84f19c0e88b, disabling compactions & flushes 2023-07-14 17:13:57,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 30ac29c4df468c2d4c926ec109f650db, disabling compactions & flushes 2023-07-14 17:13:57,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:57,434 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-14 17:13:57,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:57,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. after waiting 0 ms 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 
2023-07-14 17:13:57,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:57,434 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:13:57,434 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1478): Online Regions={64ab5cc83481d09558e7d84f19c0e88b=unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b., 1588230740=hbase:meta,,1.1588230740, 773f58cde6eff004015f5064f08a8726=hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726., f9434bc3110cf1c29610cbaaa78c2a02=hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02.} 2023-07-14 17:13:57,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:57,435 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1504): Waiting on 1588230740, 64ab5cc83481d09558e7d84f19c0e88b, 773f58cde6eff004015f5064f08a8726, f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. after waiting 0 ms 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:13:57,435 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:13:57,436 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.54 KB heapSize=61.27 KB 2023-07-14 17:13:57,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/testRename/30ac29c4df468c2d4c926ec109f650db/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 17:13:57,448 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 2023-07-14 17:13:57,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 30ac29c4df468c2d4c926ec109f650db: 2023-07-14 17:13:57,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. 
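Names like 30ac29c4df468c2d4c926ec109f650db in the "Received CLOSE" and close-journal messages above are encoded region names: the hex suffix of the full region name testRename,,1689354830016.30ac29c4df468c2d4c926ec109f650db. When correlating such lines with tables on a live cluster, the mapping can be printed with the plain Admin API; a short sketch, assuming a running cluster and the testRename table from this run (any table name works):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class EncodedRegionNameSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Prints "<table>,<startKey>,<timestamp>.<encodedName>." next to the bare encoded
          // name that region-close log lines refer to.
          for (RegionInfo region : admin.getRegions(TableName.valueOf("testRename"))) {
            System.out.println(region.getRegionNameAsString() + " -> " + region.getEncodedName());
          }
        }
      }
    }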
2023-07-14 17:13:57,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/default/unmovedTable/64ab5cc83481d09558e7d84f19c0e88b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 17:13:57,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 64ab5cc83481d09558e7d84f19c0e88b: 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689354831709.64ab5cc83481d09558e7d84f19c0e88b. 2023-07-14 17:13:57,452 DEBUG [RS:3;jenkins-hbase20:38517] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,452 INFO [RS:3;jenkins-hbase20:38517] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38517%2C1689354813230:(num 1689354813533) 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 773f58cde6eff004015f5064f08a8726, disabling compactions & flushes 2023-07-14 17:13:57,452 DEBUG [RS:3;jenkins-hbase20:38517] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:57,452 INFO [RS:3;jenkins-hbase20:38517] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. after waiting 0 ms 2023-07-14 17:13:57,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:57,453 INFO [RS:3;jenkins-hbase20:38517] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:13:57,460 INFO [RS:3;jenkins-hbase20:38517] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:13:57,460 INFO [RS:3;jenkins-hbase20:38517] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:13:57,461 INFO [RS:3;jenkins-hbase20:38517] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:13:57,463 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:13:57,464 INFO [RS:3;jenkins-hbase20:38517] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38517 2023-07-14 17:13:57,492 DEBUG [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,492 INFO [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C42361%2C1689354809221.meta:.meta(num 1689354812251) 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38517,1689354813230 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,499 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/namespace/773f58cde6eff004015f5064f08a8726/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-14 17:13:57,513 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.61 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/ef7be1e066b34440855aeb013478c534 2023-07-14 17:13:57,524 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef7be1e066b34440855aeb013478c534 2023-07-14 17:13:57,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 773f58cde6eff004015f5064f08a8726: 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689354812486.773f58cde6eff004015f5064f08a8726. 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f9434bc3110cf1c29610cbaaa78c2a02, disabling compactions & flushes 2023-07-14 17:13:57,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. after waiting 0 ms 2023-07-14 17:13:57,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:57,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing f9434bc3110cf1c29610cbaaa78c2a02 1/1 column families, dataSize=28.80 KB heapSize=47.28 KB 2023-07-14 17:13:57,553 DEBUG [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,553 INFO [RS:1;jenkins-hbase20:42361] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C42361%2C1689354809221:(num 1689354811901) 2023-07-14 17:13:57,553 DEBUG [RS:1;jenkins-hbase20:42361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,553 INFO [RS:1;jenkins-hbase20:42361] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,559 INFO [RS:1;jenkins-hbase20:42361] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:13:57,560 INFO [RS:1;jenkins-hbase20:42361] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:13:57,560 INFO [RS:1;jenkins-hbase20:42361] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:13:57,560 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:13:57,560 INFO [RS:1;jenkins-hbase20:42361] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:13:57,561 INFO [RS:1;jenkins-hbase20:42361] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:42361 2023-07-14 17:13:57,594 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=212 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/rep_barrier/106fd775f7ab4f999ce073e8de048f55 2023-07-14 17:13:57,597 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38517,1689354813230] 2023-07-14 17:13:57,597 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:57,597 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:57,597 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:57,597 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42361,1689354809221 2023-07-14 17:13:57,597 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38517,1689354813230; numProcessing=1 2023-07-14 17:13:57,597 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.80 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/.tmp/m/cd4f2d59da1e42b6925fb61ca2a962ef 2023-07-14 17:13:57,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 106fd775f7ab4f999ce073e8de048f55 2023-07-14 17:13:57,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd4f2d59da1e42b6925fb61ca2a962ef 2023-07-14 17:13:57,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/.tmp/m/cd4f2d59da1e42b6925fb61ca2a962ef as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/m/cd4f2d59da1e42b6925fb61ca2a962ef 2023-07-14 17:13:57,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd4f2d59da1e42b6925fb61ca2a962ef 2023-07-14 17:13:57,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/m/cd4f2d59da1e42b6925fb61ca2a962ef, entries=28, sequenceid=95, filesize=6.1 K 2023-07-14 17:13:57,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.80 KB/29491, heapSize ~47.27 KB/48400, currentSize=0 B/0 for f9434bc3110cf1c29610cbaaa78c2a02 in 83ms, sequenceid=95, compaction requested=false 2023-07-14 17:13:57,632 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44093,1689354809062; all regions closed. 2023-07-14 17:13:57,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/f21d3394051e4efbac8322c9c6a4a567 2023-07-14 17:13:57,635 DEBUG [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1504): Waiting on 1588230740, f9434bc3110cf1c29610cbaaa78c2a02 2023-07-14 17:13:57,637 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f21d3394051e4efbac8322c9c6a4a567 2023-07-14 17:13:57,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/rsgroup/f9434bc3110cf1c29610cbaaa78c2a02/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-14 17:13:57,639 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/info/ef7be1e066b34440855aeb013478c534 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/ef7be1e066b34440855aeb013478c534 2023-07-14 17:13:57,642 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:13:57,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 
2023-07-14 17:13:57,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f9434bc3110cf1c29610cbaaa78c2a02: 2023-07-14 17:13:57,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689354812418.f9434bc3110cf1c29610cbaaa78c2a02. 2023-07-14 17:13:57,645 DEBUG [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,645 INFO [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44093%2C1689354809062.meta:.meta(num 1689354814703) 2023-07-14 17:13:57,653 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef7be1e066b34440855aeb013478c534 2023-07-14 17:13:57,653 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/info/ef7be1e066b34440855aeb013478c534, entries=62, sequenceid=212, filesize=11.9 K 2023-07-14 17:13:57,654 DEBUG [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,654 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/rep_barrier/106fd775f7ab4f999ce073e8de048f55 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier/106fd775f7ab4f999ce073e8de048f55 2023-07-14 17:13:57,654 INFO [RS:0;jenkins-hbase20:44093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44093%2C1689354809062:(num 1689354811900) 2023-07-14 17:13:57,654 DEBUG [RS:0;jenkins-hbase20:44093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,654 INFO [RS:0;jenkins-hbase20:44093] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,654 INFO [RS:0;jenkins-hbase20:44093] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 17:13:57,655 INFO [RS:0;jenkins-hbase20:44093] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:13:57,655 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:13:57,655 INFO [RS:0;jenkins-hbase20:44093] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:13:57,655 INFO [RS:0;jenkins-hbase20:44093] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
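The flush sequence above, DefaultStoreFlusher writing the memstore to a file under the region's .tmp directory and HRegionFileSystem then committing it into the column-family directory, is the normal flush path; it runs here because regions flush before closing during shutdown. The same path can also be exercised deliberately from a client. A small sketch, assuming a live cluster and the hbase:rsgroup table named in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ManualFlushSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Asks the region servers to flush hbase:rsgroup; each region writes its memstore
          // to a temporary HFile and then commits it into the store, as in the
          // DefaultStoreFlusher / HRegionFileSystem messages above.
          admin.flush(TableName.valueOf("hbase", "rsgroup"));
        }
      }
    }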
2023-07-14 17:13:57,656 INFO [RS:0;jenkins-hbase20:44093] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44093 2023-07-14 17:13:57,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 106fd775f7ab4f999ce073e8de048f55 2023-07-14 17:13:57,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/rep_barrier/106fd775f7ab4f999ce073e8de048f55, entries=8, sequenceid=212, filesize=5.8 K 2023-07-14 17:13:57,665 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/.tmp/table/f21d3394051e4efbac8322c9c6a4a567 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/f21d3394051e4efbac8322c9c6a4a567 2023-07-14 17:13:57,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f21d3394051e4efbac8322c9c6a4a567 2023-07-14 17:13:57,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/table/f21d3394051e4efbac8322c9c6a4a567, entries=16, sequenceid=212, filesize=6.0 K 2023-07-14 17:13:57,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.54 KB/38436, heapSize ~61.23 KB/62696, currentSize=0 B/0 for 1588230740 in 240ms, sequenceid=212, compaction requested=true 2023-07-14 17:13:57,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 17:13:57,695 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/data/hbase/meta/1588230740/recovered.edits/215.seqid, newMaxSeqId=215, maxSeqId=100 2023-07-14 17:13:57,696 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:13:57,696 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:13:57,697 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:13:57,697 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 17:13:57,697 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:57,697 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:57,698 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38517,1689354813230 already deleted, retry=false 2023-07-14 17:13:57,698 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38517,1689354813230 expired; onlineServers=3 2023-07-14 17:13:57,697 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44093,1689354809062 2023-07-14 17:13:57,698 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44093,1689354809062] 2023-07-14 17:13:57,698 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44093,1689354809062; numProcessing=2 2023-07-14 17:13:57,699 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44093,1689354809062 already deleted, retry=false 2023-07-14 17:13:57,699 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44093,1689354809062 expired; onlineServers=2 2023-07-14 17:13:57,699 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,42361,1689354809221] 2023-07-14 17:13:57,699 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,42361,1689354809221; numProcessing=3 2023-07-14 17:13:57,699 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,42361,1689354809221 already deleted, retry=false 2023-07-14 17:13:57,700 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,42361,1689354809221 expired; onlineServers=1 2023-07-14 17:13:57,736 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:57,736 INFO [RS:3;jenkins-hbase20:38517] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38517,1689354813230; zookeeper connection closed. 2023-07-14 17:13:57,736 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:38517-0x1008c792048000b, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:57,736 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6bf92db6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6bf92db6 2023-07-14 17:13:57,815 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-14 17:13:57,815 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-14 17:13:57,836 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46457,1689354809303; all regions closed. 
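[editor's sketch] The NodeDeleted events and "RegionServer ephemeral node deleted, processing expiration" lines above show the liveness mechanism at work: each region server holds an ephemeral znode under /hbase/rs, and when its ZooKeeper session ends the node vanishes and the master's tracker treats the server as expired. Below is a minimal sketch of that ephemeral-node pattern using the plain org.apache.zookeeper client, not HBase's internal ZKWatcher; the quorum 127.0.0.1:54612 and the /hbase/rs prefix are taken from the log, while the server name, class name, and wiring are made up for illustration.

import org.apache.zookeeper.*;

public class EphemeralRsNodeSketch {
  public static void main(String[] args) throws Exception {
    String quorum = "127.0.0.1:54612";                            // from the log above
    String path = "/hbase/rs/example-host,44093,1689354809062";   // hypothetical server name

    // "observer" plays the role of the peer region servers / master tracker above:
    // it watches the znode and reacts when the node disappears.
    ZooKeeper observer = new ZooKeeper(quorum, 30_000, event -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        System.out.println("expired: " + event.getPath());
      }
    });

    // "owner" plays the role of the stopping region server: it registers an
    // ephemeral node that only exists as long as its session does.
    ZooKeeper owner = new ZooKeeper(quorum, 30_000, event -> { });
    owner.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    observer.exists(path, true);  // arm a NodeDeleted watch, as the ZKWatcher lines do
    owner.close();                // session ends -> ephemeral node deleted -> watch fires

    Thread.sleep(1000);           // give the watch event a moment to arrive
    observer.close();
  }
}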
2023-07-14 17:13:57,842 DEBUG [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,843 INFO [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46457%2C1689354809303.meta:.meta(num 1689354820903) 2023-07-14 17:13:57,852 DEBUG [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/oldWALs 2023-07-14 17:13:57,853 INFO [RS:2;jenkins-hbase20:46457] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46457%2C1689354809303:(num 1689354811900) 2023-07-14 17:13:57,853 DEBUG [RS:2;jenkins-hbase20:46457] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,853 INFO [RS:2;jenkins-hbase20:46457] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:13:57,853 INFO [RS:2;jenkins-hbase20:46457] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 17:13:57,853 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:13:57,854 INFO [RS:2;jenkins-hbase20:46457] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46457 2023-07-14 17:13:57,855 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46457,1689354809303 2023-07-14 17:13:57,855 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:13:57,856 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46457,1689354809303] 2023-07-14 17:13:57,856 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46457,1689354809303; numProcessing=4 2023-07-14 17:13:57,856 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46457,1689354809303 already deleted, retry=false 2023-07-14 17:13:57,856 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46457,1689354809303 expired; onlineServers=0 2023-07-14 17:13:57,856 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,41281,1689354806808' ***** 2023-07-14 17:13:57,856 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 17:13:57,857 DEBUG [M:0;jenkins-hbase20:41281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e6a4ca6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:13:57,857 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegionServer(1109): Stopping 
infoServer 2023-07-14 17:13:57,859 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 17:13:57,859 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:13:57,860 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:57,860 INFO [M:0;jenkins-hbase20:41281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a5048{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:13:57,861 INFO [M:0;jenkins-hbase20:41281] server.AbstractConnector(383): Stopped ServerConnector@23da46f6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,861 INFO [M:0;jenkins-hbase20:41281] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:13:57,861 INFO [M:0;jenkins-hbase20:41281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70894e64{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:13:57,862 INFO [M:0;jenkins-hbase20:41281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67779d68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir/,STOPPED} 2023-07-14 17:13:57,862 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41281,1689354806808 2023-07-14 17:13:57,862 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41281,1689354806808; all regions closed. 2023-07-14 17:13:57,862 DEBUG [M:0;jenkins-hbase20:41281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:13:57,862 INFO [M:0;jenkins-hbase20:41281] master.HMaster(1491): Stopping master jetty server 2023-07-14 17:13:57,863 INFO [M:0;jenkins-hbase20:41281] server.AbstractConnector(383): Stopped ServerConnector@38eb127{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:13:57,864 DEBUG [M:0;jenkins-hbase20:41281] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 17:13:57,864 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-14 17:13:57,864 DEBUG [M:0;jenkins-hbase20:41281] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 17:13:57,864 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354811339] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354811339,5,FailOnTimeoutGroup] 2023-07-14 17:13:57,864 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354811338] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354811338,5,FailOnTimeoutGroup] 2023-07-14 17:13:57,864 INFO [M:0;jenkins-hbase20:41281] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 17:13:57,864 INFO [M:0;jenkins-hbase20:41281] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-14 17:13:57,864 INFO [M:0;jenkins-hbase20:41281] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-14 17:13:57,864 DEBUG [M:0;jenkins-hbase20:41281] master.HMaster(1512): Stopping service threads 2023-07-14 17:13:57,864 INFO [M:0;jenkins-hbase20:41281] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 17:13:57,865 ERROR [M:0;jenkins-hbase20:41281] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-14 17:13:57,865 INFO [M:0;jenkins-hbase20:41281] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 17:13:57,865 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-14 17:13:57,866 DEBUG [M:0;jenkins-hbase20:41281] zookeeper.ZKUtil(398): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-14 17:13:57,866 WARN [M:0;jenkins-hbase20:41281] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-14 17:13:57,866 INFO [M:0;jenkins-hbase20:41281] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 17:13:57,866 INFO [M:0;jenkins-hbase20:41281] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 17:13:57,866 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:13:57,866 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:57,866 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-14 17:13:57,867 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:13:57,867 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:57,867 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.85 KB heapSize=622.10 KB 2023-07-14 17:13:57,886 INFO [M:0;jenkins-hbase20:41281] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.85 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/eac6c84c25f745648e0552863fe52421 2023-07-14 17:13:57,892 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/eac6c84c25f745648e0552863fe52421 as hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/eac6c84c25f745648e0552863fe52421 2023-07-14 17:13:57,900 INFO [M:0;jenkins-hbase20:41281] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/eac6c84c25f745648e0552863fe52421, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-14 17:13:57,901 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegion(2948): Finished flush of dataSize ~519.85 KB/532329, heapSize ~622.09 KB/637016, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=1152, compaction requested=false 2023-07-14 17:13:57,903 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:13:57,903 DEBUG [M:0;jenkins-hbase20:41281] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:13:57,908 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:13:57,908 INFO [M:0;jenkins-hbase20:41281] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 17:13:57,909 INFO [M:0;jenkins-hbase20:41281] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41281 2023-07-14 17:13:57,913 DEBUG [M:0;jenkins-hbase20:41281] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,41281,1689354806808 already deleted, retry=false 2023-07-14 17:13:57,937 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:57,937 INFO [RS:0;jenkins-hbase20:44093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44093,1689354809062; zookeeper connection closed. 
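[editor's sketch] The "Committing .../.tmp/<file> as .../proc/<file>" line above is the standard flush commit step: the new store file is written under the region's .tmp directory first and only then moved into the column-family directory, so readers never observe a half-written file. A minimal sketch of that write-then-rename pattern with plain Hadoop FileSystem calls follows; the /region/... paths are hypothetical, the file name is copied from the log entry purely for illustration, and real flushes go through HStore/DefaultStoreFlusher rather than code like this.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpCommitSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path tmp = new Path("/region/.tmp/eac6c84c25f745648e0552863fe52421");
    Path dst = new Path("/region/proc/eac6c84c25f745648e0552863fe52421");

    // 1. Write the new file under .tmp so it is never visible half-finished.
    try (FSDataOutputStream out = fs.create(tmp)) {
      out.write(new byte[]{1, 2, 3});   // stand-in for the flushed cell data
    }
    // 2. Move it into the column-family directory to "commit" it, as logged above.
    if (!fs.rename(tmp, dst)) {
      throw new java.io.IOException("commit failed for " + tmp);
    }
  }
}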
2023-07-14 17:13:57,937 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:44093-0x1008c7920480001, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:57,937 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@439e76d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@439e76d 2023-07-14 17:13:58,037 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,037 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:42361-0x1008c7920480002, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,037 INFO [RS:1;jenkins-hbase20:42361] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,42361,1689354809221; zookeeper connection closed. 2023-07-14 17:13:58,037 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@711c5550] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@711c5550 2023-07-14 17:13:58,137 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,137 INFO [M:0;jenkins-hbase20:41281] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41281,1689354806808; zookeeper connection closed. 2023-07-14 17:13:58,137 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): master:41281-0x1008c7920480000, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,237 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,237 INFO [RS:2;jenkins-hbase20:46457] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46457,1689354809303; zookeeper connection closed. 
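[editor's sketch] The shutdown traffic above (regions closed, WAL files moved to oldWALs, chores stopped, ZooKeeper sessions closed, ending in "Shutdown of 1 master(s) and 4 regionserver(s) complete" and "Minicluster is down" a little further on) is what a test teardown produces when it stops the mini cluster. A minimal sketch of that teardown, assuming JUnit 4 and a shared HBaseTestingUtility as in this test class; only the HBase call is real, the class name and wiring are illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
  // In the real test the same instance also started the cluster in @BeforeClass.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDown() throws Exception {
    // Stops the region servers and master, then DFS and the mini ZooKeeper,
    // producing the "Minicluster is down" line seen in the log.
    TEST_UTIL.shutdownMiniCluster();
  }
}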
2023-07-14 17:13:58,237 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): regionserver:46457-0x1008c7920480003, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:13:58,238 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@176e4711] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@176e4711 2023-07-14 17:13:58,238 INFO [Listener at localhost.localdomain/41607] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-14 17:13:58,238 WARN [Listener at localhost.localdomain/41607] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:13:58,243 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:13:58,326 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 17:13:58,377 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:13:58,377 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-103047219-148.251.75.209-1689354803494 (Datanode Uuid 839d2ce9-3c37-45c7-82fb-078e6b4b00f0) service to localhost.localdomain/127.0.0.1:37685 2023-07-14 17:13:58,379 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data5/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,379 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data6/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,382 WARN [Listener at localhost.localdomain/41607] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:13:58,396 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:13:58,500 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:13:58,500 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-103047219-148.251.75.209-1689354803494 (Datanode Uuid 987069f3-95b8-4d3e-8d3b-02249727026b) service to localhost.localdomain/127.0.0.1:37685 2023-07-14 17:13:58,500 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data3/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,501 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data4/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,502 WARN [Listener at localhost.localdomain/41607] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:13:58,509 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:13:58,513 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:13:58,513 WARN [BP-103047219-148.251.75.209-1689354803494 heartbeating to localhost.localdomain/127.0.0.1:37685] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-103047219-148.251.75.209-1689354803494 (Datanode Uuid 03883fa9-9050-4c0d-923e-7589418f6294) service to localhost.localdomain/127.0.0.1:37685 2023-07-14 17:13:58,514 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data1/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,514 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/cluster_bb0b43db-a4f7-6f63-4a62-b361ee4d7ce1/dfs/data/data2/current/BP-103047219-148.251.75.209-1689354803494] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:13:58,555 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-14 17:13:58,678 INFO [Listener at localhost.localdomain/41607] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-14 17:13:58,750 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-14 17:13:58,750 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-14 17:13:58,750 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.log.dir so I do NOT create 
it in target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/af36c1c9-2991-2f9d-42dc-fd6bb2f491a2/hadoop.tmp.dir so I do NOT create it in target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6, deleteOnExit=true 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/test.cache.data in system properties and HBase conf 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.tmp.dir in system properties and HBase conf 2023-07-14 17:13:58,751 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir in system properties and HBase conf 2023-07-14 17:13:58,752 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-14 17:13:58,752 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-14 17:13:58,752 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-14 17:13:58,752 DEBUG [Listener at localhost.localdomain/41607] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-14 17:13:58,752 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:13:58,752 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:13:58,753 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/nfs.dump.dir in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/java.io.tmpdir in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 17:13:58,754 INFO [Listener at localhost.localdomain/41607] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 17:13:58,759 WARN [Listener at localhost.localdomain/41607] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:13:58,759 WARN [Listener at localhost.localdomain/41607] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:13:58,773 DEBUG [Listener at localhost.localdomain/41607-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1008c792048000a, quorum=127.0.0.1:54612, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-14 17:13:58,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1008c792048000a, quorum=127.0.0.1:54612, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-14 17:13:58,794 WARN [Listener at localhost.localdomain/41607] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:58,796 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:58,807 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/java.io.tmpdir/Jetty_localhost_localdomain_35391_hdfs____.fkzyv1/webapp 2023-07-14 17:13:58,902 INFO [Listener at localhost.localdomain/41607] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35391 2023-07-14 17:13:58,905 WARN [Listener at localhost.localdomain/41607] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:13:58,906 WARN [Listener at localhost.localdomain/41607] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:13:58,952 WARN [Listener at localhost.localdomain/38043] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:58,965 WARN [Listener at localhost.localdomain/38043] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:58,967 WARN [Listener at localhost.localdomain/38043] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:58,968 INFO [Listener at localhost.localdomain/38043] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:58,975 INFO [Listener at localhost.localdomain/38043] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/java.io.tmpdir/Jetty_localhost_45941_datanode____eycpfg/webapp 2023-07-14 17:13:59,084 INFO [Listener at localhost.localdomain/38043] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45941 2023-07-14 17:13:59,093 WARN [Listener at localhost.localdomain/33863] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:59,123 WARN [Listener at localhost.localdomain/33863] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:59,125 WARN [Listener at localhost.localdomain/33863] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:59,126 INFO [Listener at localhost.localdomain/33863] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:59,132 INFO [Listener at localhost.localdomain/33863] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/java.io.tmpdir/Jetty_localhost_42951_datanode____vx8dz2/webapp 2023-07-14 17:13:59,206 INFO [Listener at localhost.localdomain/33863] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42951 2023-07-14 17:13:59,214 WARN [Listener at localhost.localdomain/42835] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:59,248 WARN [Listener at localhost.localdomain/42835] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:13:59,250 WARN [Listener at localhost.localdomain/42835] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:13:59,251 INFO [Listener at localhost.localdomain/42835] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:13:59,391 INFO [Listener at localhost.localdomain/42835] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/java.io.tmpdir/Jetty_localhost_40715_datanode____cza7t2/webapp 2023-07-14 17:13:59,518 INFO [Listener at localhost.localdomain/42835] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40715 2023-07-14 17:13:59,566 WARN [Listener at localhost.localdomain/39045] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:13:59,576 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad9003af5c83f34b: Processing first storage report for DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4 from datanode febb32fe-2bed-4bfa-a5a3-3c97b1c3d6b5 2023-07-14 17:13:59,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad9003af5c83f34b: from storage DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4 node DatanodeRegistration(127.0.0.1:40965, datanodeUuid=febb32fe-2bed-4bfa-a5a3-3c97b1c3d6b5, infoPort=37233, infoSecurePort=0, ipcPort=42835, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xad9003af5c83f34b: Processing first storage report for DS-e6abca6e-ea82-43aa-8d52-87c1f54220b6 from datanode febb32fe-2bed-4bfa-a5a3-3c97b1c3d6b5 2023-07-14 17:13:59,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xad9003af5c83f34b: from storage DS-e6abca6e-ea82-43aa-8d52-87c1f54220b6 node DatanodeRegistration(127.0.0.1:40965, datanodeUuid=febb32fe-2bed-4bfa-a5a3-3c97b1c3d6b5, infoPort=37233, infoSecurePort=0, ipcPort=42835, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,602 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52eb1717604c2fe2: Processing first storage report for DS-67e82162-dcd3-4d5a-b207-877faa5b6e55 from datanode 01cf9cdd-9a96-4337-a484-1414d8b27402 2023-07-14 17:13:59,603 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52eb1717604c2fe2: from storage DS-67e82162-dcd3-4d5a-b207-877faa5b6e55 node DatanodeRegistration(127.0.0.1:35273, datanodeUuid=01cf9cdd-9a96-4337-a484-1414d8b27402, infoPort=42373, infoSecurePort=0, ipcPort=33863, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,603 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52eb1717604c2fe2: Processing first storage report for DS-d3a4421c-a168-4fb6-a622-55aa98166a16 from datanode 01cf9cdd-9a96-4337-a484-1414d8b27402 2023-07-14 17:13:59,603 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52eb1717604c2fe2: from storage DS-d3a4421c-a168-4fb6-a622-55aa98166a16 node DatanodeRegistration(127.0.0.1:35273, datanodeUuid=01cf9cdd-9a96-4337-a484-1414d8b27402, infoPort=42373, infoSecurePort=0, ipcPort=33863, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,681 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x5410a6406adf5a6a: Processing first storage report for DS-e55ea2d3-c34d-43b2-8489-7a133a83d725 from datanode 8b15ed60-a34d-4cac-a2f4-411ba8dbf8b4 2023-07-14 17:13:59,681 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5410a6406adf5a6a: from storage DS-e55ea2d3-c34d-43b2-8489-7a133a83d725 node DatanodeRegistration(127.0.0.1:41529, datanodeUuid=8b15ed60-a34d-4cac-a2f4-411ba8dbf8b4, infoPort=39381, infoSecurePort=0, ipcPort=39045, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,685 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5410a6406adf5a6a: Processing first storage report for DS-06dbe7c2-e463-4ff3-a820-322f43f72067 from datanode 8b15ed60-a34d-4cac-a2f4-411ba8dbf8b4 2023-07-14 17:13:59,685 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5410a6406adf5a6a: from storage DS-06dbe7c2-e463-4ff3-a820-322f43f72067 node DatanodeRegistration(127.0.0.1:41529, datanodeUuid=8b15ed60-a34d-4cac-a2f4-411ba8dbf8b4, infoPort=39381, infoSecurePort=0, ipcPort=39045, storageInfo=lv=-57;cid=testClusterID;nsid=1462374033;c=1689354838762), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:13:59,706 DEBUG [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7 2023-07-14 17:13:59,719 INFO [Listener at localhost.localdomain/39045] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/zookeeper_0, clientPort=56537, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 17:13:59,721 INFO [Listener at localhost.localdomain/39045] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56537 2023-07-14 17:13:59,722 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:59,723 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:59,772 INFO [Listener at localhost.localdomain/39045] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65 with version=8 2023-07-14 17:13:59,772 INFO [Listener at 
localhost.localdomain/39045] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/hbase-staging 2023-07-14 17:13:59,774 DEBUG [Listener at localhost.localdomain/39045] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 17:13:59,774 DEBUG [Listener at localhost.localdomain/39045] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 17:13:59,774 DEBUG [Listener at localhost.localdomain/39045] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 17:13:59,774 DEBUG [Listener at localhost.localdomain/39045] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-14 17:13:59,775 INFO [Listener at localhost.localdomain/39045] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:59,775 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,775 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,775 INFO [Listener at localhost.localdomain/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:59,776 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,776 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:59,776 INFO [Listener at localhost.localdomain/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:59,779 INFO [Listener at localhost.localdomain/39045] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44713 2023-07-14 17:13:59,780 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:59,781 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:13:59,782 INFO [Listener at localhost.localdomain/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44713 connecting to ZooKeeper ensemble=127.0.0.1:56537 2023-07-14 17:13:59,792 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:447130x0, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:13:59,795 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
master:44713-0x1008c79a3240000 connected 2023-07-14 17:13:59,874 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:13:59,875 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:13:59,876 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:13:59,877 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44713 2023-07-14 17:13:59,882 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44713 2023-07-14 17:13:59,883 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44713 2023-07-14 17:13:59,902 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44713 2023-07-14 17:13:59,905 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44713 2023-07-14 17:13:59,908 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:13:59,908 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:13:59,909 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:13:59,909 INFO [Listener at localhost.localdomain/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 17:13:59,909 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:13:59,909 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:13:59,910 INFO [Listener at localhost.localdomain/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
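[editor's sketch] From "Starting up minicluster with option: StartMiniClusterOption{numMasters=1, ... numRegionServers=3, ... numDataNodes=3, ... numZkServers=1 ...}" onwards, the log shows the second cluster coming up: DFS datanodes register, a MiniZooKeeperCluster starts on client port 56537, and the master binds its RPC and info servers as seen here. A minimal sketch of issuing that start request, assuming the StartMiniClusterOption builder available in branch-2.4; everything outside the HBase calls is illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the option string printed in the log: 1 master, 3 region servers,
    // 3 data nodes, 1 ZooKeeper server, no pre-created root or WAL dirs.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // drives the DFS/ZK/master/RS startup logging
    util.shutdownMiniCluster();      // clean up afterwards
  }
}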
2023-07-14 17:13:59,910 INFO [Listener at localhost.localdomain/39045] http.HttpServer(1146): Jetty bound to port 35509 2023-07-14 17:13:59,911 INFO [Listener at localhost.localdomain/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:13:59,928 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:59,929 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ac5238b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:13:59,930 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:59,930 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d0b0b53{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:13:59,946 INFO [Listener at localhost.localdomain/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:13:59,952 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:13:59,953 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:13:59,953 INFO [Listener at localhost.localdomain/39045] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:13:59,955 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:13:59,957 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@125e4d20{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:13:59,958 INFO [Listener at localhost.localdomain/39045] server.AbstractConnector(333): Started ServerConnector@6a4dfbdb{HTTP/1.1, (http/1.1)}{0.0.0.0:35509} 2023-07-14 17:13:59,958 INFO [Listener at localhost.localdomain/39045] server.Server(415): Started @38607ms 2023-07-14 17:13:59,958 INFO [Listener at localhost.localdomain/39045] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65, hbase.cluster.distributed=false 2023-07-14 17:13:59,978 INFO [Listener at localhost.localdomain/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:13:59,978 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,978 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,978 INFO [Listener at localhost.localdomain/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:13:59,978 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:13:59,979 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:13:59,979 INFO [Listener at localhost.localdomain/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:13:59,988 INFO [Listener at localhost.localdomain/39045] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44435 2023-07-14 17:13:59,989 INFO [Listener at localhost.localdomain/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:00,004 DEBUG [Listener at localhost.localdomain/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:00,005 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,007 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,008 INFO [Listener at localhost.localdomain/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44435 connecting to ZooKeeper ensemble=127.0.0.1:56537 2023-07-14 17:14:00,011 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:444350x0, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:00,013 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:444350x0, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:00,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44435-0x1008c79a3240001 connected 2023-07-14 17:14:00,014 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:00,014 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:00,016 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44435 2023-07-14 17:14:00,018 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44435 2023-07-14 17:14:00,021 DEBUG [Listener at 
localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44435 2023-07-14 17:14:00,026 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44435 2023-07-14 17:14:00,027 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44435 2023-07-14 17:14:00,029 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:00,029 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:00,029 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:00,030 INFO [Listener at localhost.localdomain/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:00,030 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:00,030 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:00,030 INFO [Listener at localhost.localdomain/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
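The handlerCount, numCallQueues and maxQueueLength values in the RpcExecutor and RWQueueRpcExecutor entries above come from ordinary HBase configuration rather than anything test-specific; a minimal Java sketch of the stock keys that drive them (key names assumed from standard HBase 2.x configuration, values illustrative, not the ones this test run used):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueConfigSketch {
        public static Configuration build() {
            Configuration conf = HBaseConfiguration.create();
            // Total RPC handler threads per server (the log above runs with a
            // small test value, handlerCount=3 per executor).
            conf.setInt("hbase.regionserver.handler.count", 30);
            // Number of call queues, expressed as a factor of the handler count;
            // small factors mean few shared queues (numCallQueues=1 above).
            conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0.1f);
            // A read ratio above zero splits call queues into separate read and
            // write queues, the RWQueueRpcExecutor style seen above.
            conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
            return conf;
        }
    }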
2023-07-14 17:14:00,031 INFO [Listener at localhost.localdomain/39045] http.HttpServer(1146): Jetty bound to port 35439 2023-07-14 17:14:00,032 INFO [Listener at localhost.localdomain/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:00,035 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,035 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2514afd1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:00,036 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,036 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4f4c4036{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:00,042 INFO [Listener at localhost.localdomain/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:00,042 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:00,043 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:00,043 INFO [Listener at localhost.localdomain/39045] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:00,046 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,047 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@700c56c4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:00,048 INFO [Listener at localhost.localdomain/39045] server.AbstractConnector(333): Started ServerConnector@13c688b1{HTTP/1.1, (http/1.1)}{0.0.0.0:35439} 2023-07-14 17:14:00,049 INFO [Listener at localhost.localdomain/39045] server.Server(415): Started @38698ms 2023-07-14 17:14:00,061 INFO [Listener at localhost.localdomain/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:00,061 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,062 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,062 INFO [Listener at localhost.localdomain/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-14 17:14:00,062 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,062 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:00,062 INFO [Listener at localhost.localdomain/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:00,063 INFO [Listener at localhost.localdomain/39045] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40287 2023-07-14 17:14:00,064 INFO [Listener at localhost.localdomain/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:00,065 DEBUG [Listener at localhost.localdomain/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:00,066 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,067 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,068 INFO [Listener at localhost.localdomain/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40287 connecting to ZooKeeper ensemble=127.0.0.1:56537 2023-07-14 17:14:00,120 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:402870x0, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:00,122 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:402870x0, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:00,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40287-0x1008c79a3240002 connected 2023-07-14 17:14:00,123 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:00,125 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:00,130 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40287 2023-07-14 17:14:00,131 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40287 2023-07-14 17:14:00,132 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40287 2023-07-14 17:14:00,133 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40287 2023-07-14 17:14:00,138 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40287 2023-07-14 17:14:00,141 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:00,141 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:00,142 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:00,142 INFO [Listener at localhost.localdomain/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:00,142 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:00,142 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:00,143 INFO [Listener at localhost.localdomain/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 17:14:00,143 INFO [Listener at localhost.localdomain/39045] http.HttpServer(1146): Jetty bound to port 33619 2023-07-14 17:14:00,143 INFO [Listener at localhost.localdomain/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:00,165 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,165 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77ce8649{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:00,166 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,166 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@465b158d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:00,173 INFO [Listener at localhost.localdomain/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:00,175 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:00,175 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:00,176 INFO [Listener at 
localhost.localdomain/39045] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:00,179 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,180 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1823abf3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:00,183 INFO [Listener at localhost.localdomain/39045] server.AbstractConnector(333): Started ServerConnector@3f25bf96{HTTP/1.1, (http/1.1)}{0.0.0.0:33619} 2023-07-14 17:14:00,183 INFO [Listener at localhost.localdomain/39045] server.Server(415): Started @38832ms 2023-07-14 17:14:00,194 INFO [Listener at localhost.localdomain/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:00,195 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,195 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,195 INFO [Listener at localhost.localdomain/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:14:00,196 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:00,196 INFO [Listener at localhost.localdomain/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:00,196 INFO [Listener at localhost.localdomain/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:00,199 INFO [Listener at localhost.localdomain/39045] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37213 2023-07-14 17:14:00,200 INFO [Listener at localhost.localdomain/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:00,203 DEBUG [Listener at localhost.localdomain/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:00,204 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,206 INFO [Listener at localhost.localdomain/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,207 INFO [Listener at localhost.localdomain/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37213 
connecting to ZooKeeper ensemble=127.0.0.1:56537 2023-07-14 17:14:00,210 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:372130x0, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:00,213 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:372130x0, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:00,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37213-0x1008c79a3240003 connected 2023-07-14 17:14:00,214 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:00,215 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ZKUtil(164): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:00,216 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37213 2023-07-14 17:14:00,217 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37213 2023-07-14 17:14:00,218 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37213 2023-07-14 17:14:00,219 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37213 2023-07-14 17:14:00,219 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37213 2023-07-14 17:14:00,221 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:00,222 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:00,222 INFO [Listener at localhost.localdomain/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:00,223 INFO [Listener at localhost.localdomain/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:00,223 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:00,223 INFO [Listener at localhost.localdomain/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:00,223 INFO [Listener at localhost.localdomain/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
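The ZKUtil entries reading "Set watcher on znode that does not yet exist" correspond to registering an existence watch on paths such as /hbase/master. A minimal sketch of the same idea with the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil (the ensemble address is taken from the log; the watcher bodies are illustrative only):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper("127.0.0.1:56537", 90_000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();  // matches the SyncConnected event above
                }
            });
            connected.await();
            // Registers a watch even though /hbase/master may not exist yet; the
            // watch fires with NodeCreated once the active master registers itself.
            zk.exists("/hbase/master", event ->
                System.out.println("event: " + event.getType() + " on " + event.getPath()));
            // ... test logic ...
            zk.close();
        }
    }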
2023-07-14 17:14:00,224 INFO [Listener at localhost.localdomain/39045] http.HttpServer(1146): Jetty bound to port 33913 2023-07-14 17:14:00,224 INFO [Listener at localhost.localdomain/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:00,226 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,227 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1de19e3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:00,227 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,227 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2016266d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:00,233 INFO [Listener at localhost.localdomain/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:00,234 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:00,234 INFO [Listener at localhost.localdomain/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:00,235 INFO [Listener at localhost.localdomain/39045] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 17:14:00,240 INFO [Listener at localhost.localdomain/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:00,241 INFO [Listener at localhost.localdomain/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@467aab1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:00,243 INFO [Listener at localhost.localdomain/39045] server.AbstractConnector(333): Started ServerConnector@3a4603bc{HTTP/1.1, (http/1.1)}{0.0.0.0:33913} 2023-07-14 17:14:00,243 INFO [Listener at localhost.localdomain/39045] server.Server(415): Started @38892ms 2023-07-14 17:14:00,245 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:00,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@608691bf{HTTP/1.1, (http/1.1)}{0.0.0.0:39263} 2023-07-14 17:14:00,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @38908ms 2023-07-14 17:14:00,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,260 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): 
master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:14:00,261 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,261 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:00,261 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:00,262 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:00,261 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:00,263 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,264 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:14:00,265 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:14:00,265 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44713,1689354839774 from backup master directory 2023-07-14 17:14:00,265 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,265 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:14:00,265 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
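The ActiveMasterManager entries above record the master deleting its backup-masters znode and taking the active role; from a client the same state is visible through ClusterMetrics. A minimal sketch, assuming the connection settings below point at this mini-cluster's ZooKeeper:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ActiveMasterSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.set("hbase.zookeeper.property.clientPort", "56537");
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                ClusterMetrics metrics = admin.getClusterMetrics();
                // The active master and any remaining backup masters.
                System.out.println("active master:  " + metrics.getMasterName());
                System.out.println("backup masters: " + metrics.getBackupMasterNames());
            }
        }
    }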
2023-07-14 17:14:00,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/hbase.id with ID: 6e713b61-c3e8-4468-8dff-a8a8449d3636 2023-07-14 17:14:00,293 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:00,295 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,306 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1d35ecfe to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:00,313 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59457c6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:00,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:00,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 17:14:00,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:00,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store-tmp 2023-07-14 17:14:00,326 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:00,326 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:14:00,326 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:00,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:00,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:14:00,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:00,327 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:00,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:00,327 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/WALs/jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,331 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44713%2C1689354839774, suffix=, logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/WALs/jenkins-hbase20.apache.org,44713,1689354839774, archiveDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/oldWALs, maxLogs=10 2023-07-14 17:14:00,347 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK] 2023-07-14 17:14:00,348 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK] 2023-07-14 17:14:00,349 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK] 2023-07-14 17:14:00,351 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/WALs/jenkins-hbase20.apache.org,44713,1689354839774/jenkins-hbase20.apache.org%2C44713%2C1689354839774.1689354840331 2023-07-14 17:14:00,351 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK], DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK], DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK]] 2023-07-14 17:14:00,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:00,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:00,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,355 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,356 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 17:14:00,357 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 17:14:00,357 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,358 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,358 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,361 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:00,362 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-14 17:14:00,363 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9459331200, jitterRate=-0.11903113126754761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:00,363 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:00,363 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 17:14:00,364 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 17:14:00,364 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 17:14:00,365 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 17:14:00,365 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-14 17:14:00,365 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-14 17:14:00,365 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 17:14:00,366 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 17:14:00,367 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
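The column-family attributes printed for the master:store region above (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', IN_MEMORY => 'false') map directly onto the public descriptor builders; a minimal sketch using an illustrative table name rather than the internal master:store table:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
        public static TableDescriptor build() {
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))   // family name as in the log
                .setMaxVersions(1)                   // VERSIONS => '1'
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setBlocksize(65536)                 // BLOCKSIZE => '65536'
                .setInMemory(false)                  // IN_MEMORY => 'false'
                .build();
            return TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example_store"))  // illustrative name
                .setColumnFamily(proc)
                .build();
        }
    }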
2023-07-14 17:14:00,368 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 17:14:00,368 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 17:14:00,368 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 17:14:00,370 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 17:14:00,371 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 17:14:00,372 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 17:14:00,373 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:00,373 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:00,373 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:00,373 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:00,373 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44713,1689354839774, sessionid=0x1008c79a3240000, setting cluster-up flag (Was=false) 2023-07-14 17:14:00,377 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,379 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 17:14:00,380 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,382 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,385 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 17:14:00,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:00,387 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.hbase-snapshot/.tmp 2023-07-14 17:14:00,388 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 17:14:00,388 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 17:14:00,392 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 17:14:00,392 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:14:00,392 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-14 17:14:00,393 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-14 17:14:00,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:00,403 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:14:00,403 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
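The CoprocessorHost and StochasticLoadBalancer entries above reflect plain configuration. A minimal sketch of the corresponding keys; the balancer key names are the stock hbase.master.balancer.stochastic.* ones and are assumed rather than read from this log, while the values mirror the "Loaded config" line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterTuningSketch {
        public static Configuration build() {
            Configuration conf = HBaseConfiguration.create();
            // Comma-separated master coprocessors, loaded in priority order as in
            // the CoprocessorHost lines above.
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            // StochasticLoadBalancer knobs echoed in the "Loaded config" entry.
            conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
            conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
            conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
            conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30000);
            return conf;
        }
    }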
2023-07-14 17:14:00,404 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:14:00,404 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:00,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,415 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689354870415 2023-07-14 17:14:00,415 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:00,418 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 17:14:00,418 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 17:14:00,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 17:14:00,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 
17:14:00,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 17:14:00,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 17:14:00,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 17:14:00,420 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:00,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,435 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 17:14:00,436 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 17:14:00,436 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 17:14:00,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 17:14:00,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 17:14:00,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354840439,5,FailOnTimeoutGroup] 2023-07-14 17:14:00,441 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354840440,5,FailOnTimeoutGroup] 2023-07-14 17:14:00,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. 
Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 17:14:00,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,447 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(951): ClusterId : 6e713b61-c3e8-4468-8dff-a8a8449d3636 2023-07-14 17:14:00,447 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(951): ClusterId : 6e713b61-c3e8-4468-8dff-a8a8449d3636 2023-07-14 17:14:00,447 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(951): ClusterId : 6e713b61-c3e8-4468-8dff-a8a8449d3636 2023-07-14 17:14:00,451 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:00,451 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:00,451 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:00,453 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:00,453 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:00,453 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:00,453 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:00,457 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:00,457 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:00,457 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', 
REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65 2023-07-14 17:14:00,460 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:00,460 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:00,461 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:00,463 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:00,465 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:00,465 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ReadOnlyZKClient(139): Connect 0x43a275c2 to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:00,465 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ReadOnlyZKClient(139): Connect 0x316f5206 to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:00,466 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ReadOnlyZKClient(139): Connect 0x6599bd37 to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:00,476 DEBUG [RS:2;jenkins-hbase20:37213] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d6593c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:00,476 DEBUG [RS:0;jenkins-hbase20:44435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b5e4ff4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:00,477 DEBUG [RS:0;jenkins-hbase20:44435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a6345cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:00,477 DEBUG [RS:2;jenkins-hbase20:37213] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c5899fd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:00,480 DEBUG [RS:1;jenkins-hbase20:40287] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fa8aa5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:00,480 DEBUG [RS:1;jenkins-hbase20:40287] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b973976, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:00,482 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:00,483 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:14:00,485 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/info 2023-07-14 17:14:00,485 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:14:00,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:14:00,487 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44435 2023-07-14 17:14:00,487 INFO [RS:0;jenkins-hbase20:44435] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:00,487 INFO [RS:0;jenkins-hbase20:44435] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:00,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:00,487 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-14 17:14:00,488 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:14:00,488 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44713,1689354839774 with isa=jenkins-hbase20.apache.org/148.251.75.209:44435, startcode=1689354839977 2023-07-14 17:14:00,488 DEBUG [RS:0;jenkins-hbase20:44435] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:00,488 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,488 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:14:00,490 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:40287 2023-07-14 17:14:00,490 INFO [RS:1;jenkins-hbase20:40287] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:00,490 INFO [RS:1;jenkins-hbase20:40287] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:00,490 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-14 17:14:00,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/table 2023-07-14 17:14:00,490 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53901, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:00,490 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:14:00,490 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:37213 2023-07-14 17:14:00,490 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44713,1689354839774 with isa=jenkins-hbase20.apache.org/148.251.75.209:40287, startcode=1689354840061 2023-07-14 17:14:00,490 INFO [RS:2;jenkins-hbase20:37213] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:00,490 INFO [RS:2;jenkins-hbase20:37213] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:00,490 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:14:00,490 DEBUG [RS:1;jenkins-hbase20:40287] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:00,492 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44713] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,492 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-14 17:14:00,492 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,493 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 17:14:00,493 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,44713,1689354839774 with isa=jenkins-hbase20.apache.org/148.251.75.209:37213, startcode=1689354840194 2023-07-14 17:14:00,493 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65 2023-07-14 17:14:00,493 DEBUG [RS:2;jenkins-hbase20:37213] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:00,493 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38043 2023-07-14 17:14:00,494 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35509 2023-07-14 17:14:00,494 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740 2023-07-14 17:14:00,494 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58535, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:00,495 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39039, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:00,495 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44713] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,495 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740 2023-07-14 17:14:00,495 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ZKUtil(162): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,495 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:00,495 WARN [RS:0;jenkins-hbase20:44435] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 17:14:00,495 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44713] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,495 INFO [RS:0;jenkins-hbase20:44435] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:00,495 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65 2023-07-14 17:14:00,495 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,495 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38043 2023-07-14 17:14:00,495 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35509 2023-07-14 17:14:00,495 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65 2023-07-14 17:14:00,495 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38043 2023-07-14 17:14:00,495 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35509 2023-07-14 17:14:00,496 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:14:00,496 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 17:14:00,500 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ZKUtil(162): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,500 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,40287,1689354840061] 2023-07-14 17:14:00,500 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44435,1689354839977] 2023-07-14 17:14:00,500 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,37213,1689354840194] 2023-07-14 17:14:00,500 WARN [RS:1;jenkins-hbase20:40287] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 17:14:00,500 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ZKUtil(162): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,500 INFO [RS:1;jenkins-hbase20:40287] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:00,500 WARN [RS:2;jenkins-hbase20:37213] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:14:00,500 INFO [RS:2;jenkins-hbase20:37213] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:00,500 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,500 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,501 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 17:14:00,506 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:14:00,509 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ZKUtil(162): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,510 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ZKUtil(162): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,510 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ZKUtil(162): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,511 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ZKUtil(162): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,511 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ZKUtil(162): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,511 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ZKUtil(162): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,512 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ZKUtil(162): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,512 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ZKUtil(162): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,512 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ZKUtil(162): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,512 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:00,512 DEBUG [RS:2;jenkins-hbase20:37213] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:00,513 INFO [RS:0;jenkins-hbase20:44435] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:00,513 INFO [RS:2;jenkins-hbase20:37213] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:00,513 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:00,514 INFO [RS:1;jenkins-hbase20:40287] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:00,514 INFO [RS:0;jenkins-hbase20:44435] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:00,514 INFO [RS:0;jenkins-hbase20:44435] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:00,514 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:00,514 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,515 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:00,516 INFO [RS:2;jenkins-hbase20:37213] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:00,517 INFO [RS:2;jenkins-hbase20:37213] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:00,517 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,517 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:00,518 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11834817440, jitterRate=0.10220326483249664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:14:00,518 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:14:00,518 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:14:00,518 INFO [RS:1;jenkins-hbase20:40287] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:00,518 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:14:00,518 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:14:00,519 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:14:00,519 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:14:00,519 INFO [RS:1;jenkins-hbase20:40287] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:00,519 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,519 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:00,519 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,519 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,519 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:00,520 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:14:00,520 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,520 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:00,520 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:00,521 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:00,521 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 17:14:00,521 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 
17:14:00,521 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,521 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:2;jenkins-hbase20:37213] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,521 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:0;jenkins-hbase20:44435] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:00,522 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,523 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,529 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,529 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,529 DEBUG [RS:1;jenkins-hbase20:40287] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:00,529 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,529 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,529 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,529 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,530 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,530 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,530 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,531 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 17:14:00,531 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 17:14:00,534 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,534 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,534 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,534 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,547 INFO [RS:0;jenkins-hbase20:44435] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:00,548 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44435,1689354839977-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,548 INFO [RS:2;jenkins-hbase20:37213] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:00,548 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37213,1689354840194-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,550 INFO [RS:1;jenkins-hbase20:40287] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:00,551 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40287,1689354840061-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,558 INFO [RS:0;jenkins-hbase20:44435] regionserver.Replication(203): jenkins-hbase20.apache.org,44435,1689354839977 started 2023-07-14 17:14:00,558 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44435,1689354839977, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44435, sessionid=0x1008c79a3240001 2023-07-14 17:14:00,559 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:00,559 DEBUG [RS:0;jenkins-hbase20:44435] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,559 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44435,1689354839977' 2023-07-14 17:14:00,559 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:00,559 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44435,1689354839977' 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:00,560 DEBUG [RS:0;jenkins-hbase20:44435] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:00,561 DEBUG [RS:0;jenkins-hbase20:44435] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:00,561 INFO [RS:2;jenkins-hbase20:37213] regionserver.Replication(203): jenkins-hbase20.apache.org,37213,1689354840194 started 2023-07-14 17:14:00,561 INFO [RS:0;jenkins-hbase20:44435] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 17:14:00,561 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,37213,1689354840194, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:37213, sessionid=0x1008c79a3240003 2023-07-14 17:14:00,561 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:00,561 DEBUG [RS:2;jenkins-hbase20:37213] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,37213,1689354840194 
2023-07-14 17:14:00,561 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37213,1689354840194' 2023-07-14 17:14:00,561 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:00,562 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:00,562 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:00,562 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:00,563 DEBUG [RS:2;jenkins-hbase20:37213] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:00,563 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37213,1689354840194' 2023-07-14 17:14:00,563 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:00,563 DEBUG [RS:2;jenkins-hbase20:37213] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:00,563 DEBUG [RS:2;jenkins-hbase20:37213] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:00,563 INFO [RS:2;jenkins-hbase20:37213] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 17:14:00,564 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,564 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,564 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ZKUtil(398): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 17:14:00,564 INFO [RS:0;jenkins-hbase20:44435] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 17:14:00,565 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,565 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,566 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ZKUtil(398): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 17:14:00,566 INFO [RS:2;jenkins-hbase20:37213] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 17:14:00,567 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,567 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,571 INFO [RS:1;jenkins-hbase20:40287] regionserver.Replication(203): jenkins-hbase20.apache.org,40287,1689354840061 started 2023-07-14 17:14:00,571 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,40287,1689354840061, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:40287, sessionid=0x1008c79a3240002 2023-07-14 17:14:00,571 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:00,571 DEBUG [RS:1;jenkins-hbase20:40287] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,571 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40287,1689354840061' 2023-07-14 17:14:00,571 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40287,1689354840061' 2023-07-14 17:14:00,572 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:00,573 DEBUG [RS:1;jenkins-hbase20:40287] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:00,573 DEBUG [RS:1;jenkins-hbase20:40287] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:00,573 INFO [RS:1;jenkins-hbase20:40287] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 17:14:00,573 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,574 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ZKUtil(398): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 17:14:00,574 INFO [RS:1;jenkins-hbase20:40287] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 17:14:00,574 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:00,574 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,669 INFO [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44435%2C1689354839977, suffix=, logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,44435,1689354839977, archiveDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs, maxLogs=32 2023-07-14 17:14:00,669 INFO [RS:2;jenkins-hbase20:37213] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37213%2C1689354840194, suffix=, logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,37213,1689354840194, archiveDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs, maxLogs=32 2023-07-14 17:14:00,677 INFO [RS:1;jenkins-hbase20:40287] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40287%2C1689354840061, suffix=, logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,40287,1689354840061, archiveDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs, maxLogs=32 2023-07-14 17:14:00,684 DEBUG [jenkins-hbase20:44713] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 17:14:00,685 DEBUG [jenkins-hbase20:44713] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:00,685 DEBUG [jenkins-hbase20:44713] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:00,685 DEBUG [jenkins-hbase20:44713] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:00,685 DEBUG [jenkins-hbase20:44713] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:00,685 DEBUG [jenkins-hbase20:44713] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:00,687 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44435,1689354839977, state=OPENING 2023-07-14 17:14:00,689 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 17:14:00,690 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK] 2023-07-14 17:14:00,691 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK] 2023-07-14 17:14:00,691 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK] 2023-07-14 17:14:00,695 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:00,696 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:14:00,697 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK] 2023-07-14 17:14:00,697 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK] 2023-07-14 17:14:00,697 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK] 2023-07-14 17:14:00,706 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44435,1689354839977}] 2023-07-14 17:14:00,711 INFO [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,44435,1689354839977/jenkins-hbase20.apache.org%2C44435%2C1689354839977.1689354840670 2023-07-14 17:14:00,712 DEBUG [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK], DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK], DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK]] 2023-07-14 17:14:00,712 INFO [RS:2;jenkins-hbase20:37213] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,37213,1689354840194/jenkins-hbase20.apache.org%2C37213%2C1689354840194.1689354840670 2023-07-14 17:14:00,713 WARN [ReadOnlyZKClient-127.0.0.1:56537@0x1d35ecfe] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-14 17:14:00,713 DEBUG [RS:2;jenkins-hbase20:37213] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK], DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK], DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK]] 2023-07-14 17:14:00,713 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:00,728 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK] 2023-07-14 17:14:00,728 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK] 2023-07-14 17:14:00,728 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:00,728 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK] 2023-07-14 17:14:00,729 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44435] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:55300 deadline: 1689354900728, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,731 INFO [RS:1;jenkins-hbase20:40287] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,40287,1689354840061/jenkins-hbase20.apache.org%2C40287%2C1689354840061.1689354840678 2023-07-14 17:14:00,731 DEBUG [RS:1;jenkins-hbase20:40287] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK], DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK], DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK]] 2023-07-14 17:14:00,864 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:00,866 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:14:00,868 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55308, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:14:00,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 17:14:00,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:00,875 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44435%2C1689354839977.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,44435,1689354839977, archiveDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs, maxLogs=32 2023-07-14 17:14:00,894 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK] 2023-07-14 17:14:00,894 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK] 2023-07-14 17:14:00,895 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK] 2023-07-14 17:14:00,898 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/WALs/jenkins-hbase20.apache.org,44435,1689354839977/jenkins-hbase20.apache.org%2C44435%2C1689354839977.meta.1689354840875.meta 2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35273,DS-67e82162-dcd3-4d5a-b207-877faa5b6e55,DISK], DatanodeInfoWithStorage[127.0.0.1:41529,DS-e55ea2d3-c34d-43b2-8489-7a133a83d725,DISK], DatanodeInfoWithStorage[127.0.0.1:40965,DS-8fc74cd8-06ac-4991-a9a6-6fadf564bcf4,DISK]] 2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 17:14:00,899 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 17:14:00,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:00,900 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 17:14:00,900 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 17:14:00,901 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:14:00,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/info 2023-07-14 17:14:00,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/info 2023-07-14 17:14:00,902 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:14:00,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:14:00,904 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:00,904 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:00,904 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:14:00,904 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,904 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:14:00,905 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/table 2023-07-14 17:14:00,905 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/table 2023-07-14 17:14:00,905 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:14:00,906 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:00,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740 2023-07-14 17:14:00,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740 2023-07-14 17:14:00,910 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-14 17:14:00,911 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:14:00,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9951370880, jitterRate=-0.07320636510848999}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:14:00,912 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:14:00,913 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689354840864 2023-07-14 17:14:00,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 17:14:00,920 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 17:14:00,920 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44435,1689354839977, state=OPEN 2023-07-14 17:14:00,921 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:14:00,921 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:14:00,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 17:14:00,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44435,1689354839977 in 225 msec 2023-07-14 17:14:00,926 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 17:14:00,926 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 402 msec 2023-07-14 17:14:00,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 533 msec 2023-07-14 17:14:00,927 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689354840927, completionTime=-1 2023-07-14 17:14:00,927 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 17:14:00,927 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-14 17:14:00,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 17:14:00,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689354900933 2023-07-14 17:14:00,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689354960933 2023-07-14 17:14:00,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-14 17:14:00,939 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44713,1689354839774-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,939 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44713,1689354839774-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44713,1689354839774-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44713, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:00,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-14 17:14:00,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:00,941 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 17:14:00,941 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 17:14:00,945 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:00,945 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:00,948 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/namespace/17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:00,949 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/namespace/17de33c40105a51666cd874cf8eea882 empty. 2023-07-14 17:14:00,950 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/namespace/17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:00,950 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 17:14:00,965 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:00,966 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 17de33c40105a51666cd874cf8eea882, NAME => 'hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 17de33c40105a51666cd874cf8eea882, disabling compactions & flushes 2023-07-14 17:14:00,975 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. after waiting 0 ms 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:00,975 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:00,975 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 17de33c40105a51666cd874cf8eea882: 2023-07-14 17:14:00,977 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:00,978 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354840978"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354840978"}]},"ts":"1689354840978"} 2023-07-14 17:14:00,981 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:14:00,981 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:00,982 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354840981"}]},"ts":"1689354840981"} 2023-07-14 17:14:00,983 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 17:14:00,985 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:00,985 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:00,985 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:00,985 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:00,985 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:00,986 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=17de33c40105a51666cd874cf8eea882, ASSIGN}] 2023-07-14 17:14:00,987 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=17de33c40105a51666cd874cf8eea882, ASSIGN 2023-07-14 17:14:00,988 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=17de33c40105a51666cd874cf8eea882, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44435,1689354839977; forceNewPlan=false, retain=false 2023-07-14 17:14:01,033 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:01,035 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 17:14:01,037 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:01,038 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:01,041 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,041 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534 empty. 
2023-07-14 17:14:01,042 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,042 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 17:14:01,057 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:01,058 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 00874062ea90f3cb7a7f2fb7ef938534, NAME => 'hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp 2023-07-14 17:14:01,138 INFO [jenkins-hbase20:44713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 17:14:01,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=17de33c40105a51666cd874cf8eea882, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:01,140 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354841139"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354841139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354841139"}]},"ts":"1689354841139"} 2023-07-14 17:14:01,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 17de33c40105a51666cd874cf8eea882, server=jenkins-hbase20.apache.org,44435,1689354839977}] 2023-07-14 17:14:01,297 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 
2023-07-14 17:14:01,297 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17de33c40105a51666cd874cf8eea882, NAME => 'hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:01,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:01,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,299 INFO [StoreOpener-17de33c40105a51666cd874cf8eea882-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,301 DEBUG [StoreOpener-17de33c40105a51666cd874cf8eea882-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/info 2023-07-14 17:14:01,301 DEBUG [StoreOpener-17de33c40105a51666cd874cf8eea882-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/info 2023-07-14 17:14:01,301 INFO [StoreOpener-17de33c40105a51666cd874cf8eea882-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17de33c40105a51666cd874cf8eea882 columnFamilyName info 2023-07-14 17:14:01,302 INFO [StoreOpener-17de33c40105a51666cd874cf8eea882-1] regionserver.HStore(310): Store=17de33c40105a51666cd874cf8eea882/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:01,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,303 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:01,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:01,309 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 17de33c40105a51666cd874cf8eea882; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10022015680, jitterRate=-0.06662705540657043}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:01,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 17de33c40105a51666cd874cf8eea882: 2023-07-14 17:14:01,310 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882., pid=7, masterSystemTime=1689354841293 2023-07-14 17:14:01,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:01,313 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 
2023-07-14 17:14:01,313 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=17de33c40105a51666cd874cf8eea882, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:01,314 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354841313"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354841313"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354841313"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354841313"}]},"ts":"1689354841313"} 2023-07-14 17:14:01,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-14 17:14:01,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 17de33c40105a51666cd874cf8eea882, server=jenkins-hbase20.apache.org,44435,1689354839977 in 174 msec 2023-07-14 17:14:01,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-14 17:14:01,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=17de33c40105a51666cd874cf8eea882, ASSIGN in 331 msec 2023-07-14 17:14:01,319 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:01,319 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354841319"}]},"ts":"1689354841319"} 2023-07-14 17:14:01,320 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 17:14:01,322 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:01,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-07-14 17:14:01,343 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 17:14:01,343 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:01,343 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:01,348 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 17:14:01,357 DEBUG [Listener at localhost.localdomain/39045-EventThread] 
zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:01,360 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-14 17:14:01,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 17:14:01,374 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-14 17:14:01,374 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 00874062ea90f3cb7a7f2fb7ef938534, disabling compactions & flushes 2023-07-14 17:14:01,475 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. after waiting 0 ms 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,475 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,475 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 00874062ea90f3cb7a7f2fb7ef938534: 2023-07-14 17:14:01,477 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:01,479 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354841478"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354841478"}]},"ts":"1689354841478"} 2023-07-14 17:14:01,480 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-14 17:14:01,481 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:01,481 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354841481"}]},"ts":"1689354841481"} 2023-07-14 17:14:01,483 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 17:14:01,485 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:01,485 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:01,485 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:01,486 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:01,486 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:01,486 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=00874062ea90f3cb7a7f2fb7ef938534, ASSIGN}] 2023-07-14 17:14:01,487 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=00874062ea90f3cb7a7f2fb7ef938534, ASSIGN 2023-07-14 17:14:01,487 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=00874062ea90f3cb7a7f2fb7ef938534, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,40287,1689354840061; forceNewPlan=false, retain=false 2023-07-14 17:14:01,638 INFO [jenkins-hbase20:44713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:14:01,639 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=00874062ea90f3cb7a7f2fb7ef938534, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:01,639 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354841639"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354841639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354841639"}]},"ts":"1689354841639"} 2023-07-14 17:14:01,641 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 00874062ea90f3cb7a7f2fb7ef938534, server=jenkins-hbase20.apache.org,40287,1689354840061}] 2023-07-14 17:14:01,799 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:01,799 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:14:01,800 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:14:01,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00874062ea90f3cb7a7f2fb7ef938534, NAME => 'hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. service=MultiRowMutationService 2023-07-14 17:14:01,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,807 INFO [StoreOpener-00874062ea90f3cb7a7f2fb7ef938534-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,808 DEBUG [StoreOpener-00874062ea90f3cb7a7f2fb7ef938534-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/m 2023-07-14 17:14:01,808 DEBUG [StoreOpener-00874062ea90f3cb7a7f2fb7ef938534-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/m 2023-07-14 17:14:01,808 INFO [StoreOpener-00874062ea90f3cb7a7f2fb7ef938534-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 00874062ea90f3cb7a7f2fb7ef938534 columnFamilyName m 2023-07-14 17:14:01,809 INFO [StoreOpener-00874062ea90f3cb7a7f2fb7ef938534-1] regionserver.HStore(310): Store=00874062ea90f3cb7a7f2fb7ef938534/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:01,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,825 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:01,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:01,833 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 00874062ea90f3cb7a7f2fb7ef938534; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@772c37f4, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:01,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 00874062ea90f3cb7a7f2fb7ef938534: 2023-07-14 17:14:01,833 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534., pid=11, masterSystemTime=1689354841799 2023-07-14 17:14:01,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,838 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:01,838 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=00874062ea90f3cb7a7f2fb7ef938534, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:01,838 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354841838"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354841838"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354841838"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354841838"}]},"ts":"1689354841838"} 2023-07-14 17:14:01,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-14 17:14:01,842 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 00874062ea90f3cb7a7f2fb7ef938534, server=jenkins-hbase20.apache.org,40287,1689354840061 in 199 msec 2023-07-14 17:14:01,844 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=6 2023-07-14 17:14:01,844 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=00874062ea90f3cb7a7f2fb7ef938534, ASSIGN in 356 msec 2023-07-14 17:14:01,859 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:01,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 493 msec 2023-07-14 17:14:01,866 INFO 
[PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:01,866 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354841866"}]},"ts":"1689354841866"} 2023-07-14 17:14:01,869 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 17:14:01,876 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 17:14:01,876 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:01,877 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 17:14:01,877 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.611sec 2023-07-14 17:14:01,878 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 843 msec 2023-07-14 17:14:01,879 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-14 17:14:01,879 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:01,880 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-14 17:14:01,880 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-14 17:14:01,884 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-14 17:14:01,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:01,887 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:01,889 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:01,890 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600 empty. 2023-07-14 17:14:01,890 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:01,891 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-14 17:14:01,893 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-14 17:14:01,893 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-14 17:14:01,895 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:01,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:01,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 17:14:01,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 17:14:01,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44713,1689354839774-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-14 17:14:01,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44713,1689354839774-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-14 17:14:01,901 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 17:14:01,913 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:01,914 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1a1eaf2f0a815c2ecbe2d392d2aa9600, NAME => 'hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp 2023-07-14 17:14:01,926 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:01,926 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 1a1eaf2f0a815c2ecbe2d392d2aa9600, disabling compactions & flushes 2023-07-14 17:14:01,926 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:01,926 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:01,927 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. after waiting 0 ms 2023-07-14 17:14:01,927 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:01,927 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:01,927 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 1a1eaf2f0a815c2ecbe2d392d2aa9600: 2023-07-14 17:14:01,929 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:01,930 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689354841930"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354841930"}]},"ts":"1689354841930"} 2023-07-14 17:14:01,935 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-14 17:14:01,936 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:01,937 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354841937"}]},"ts":"1689354841937"} 2023-07-14 17:14:01,938 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-14 17:14:01,939 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:01,940 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58530, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:01,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:01,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:01,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:01,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:01,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:01,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=1a1eaf2f0a815c2ecbe2d392d2aa9600, ASSIGN}] 2023-07-14 17:14:01,945 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 17:14:01,945 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-14 17:14:01,946 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=1a1eaf2f0a815c2ecbe2d392d2aa9600, ASSIGN 2023-07-14 17:14:01,947 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=1a1eaf2f0a815c2ecbe2d392d2aa9600, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,40287,1689354840061; forceNewPlan=false, retain=false 2023-07-14 17:14:01,952 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ReadOnlyZKClient(139): Connect 0x615cbeb2 to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:01,958 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:01,959 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:01,960 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:14:01,960 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,44713,1689354839774] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 17:14:01,975 DEBUG [Listener at localhost.localdomain/39045] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@385993d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:01,976 DEBUG [hconnection-0x5519df56-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:01,980 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55320, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:01,982 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:01,983 INFO [Listener at localhost.localdomain/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:01,985 DEBUG [Listener at localhost.localdomain/39045] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 17:14:01,988 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57308, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 17:14:01,991 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 17:14:01,991 
DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:01,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-14 17:14:01,992 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ReadOnlyZKClient(139): Connect 0x1c8ca688 to 127.0.0.1:56537 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:02,006 DEBUG [Listener at localhost.localdomain/39045] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7742920, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:02,007 INFO [Listener at localhost.localdomain/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56537 2023-07-14 17:14:02,035 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:02,036 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008c79a324000a connected 2023-07-14 17:14:02,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-14 17:14:02,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-14 17:14:02,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-14 17:14:02,051 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:02,054 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-14 17:14:02,097 INFO [jenkins-hbase20:44713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
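The CreateNamespaceProcedure above (pid=14) corresponds to a client request to create namespace 'np1' with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2. A minimal client-side sketch of issuing that request through the HBase 2.x Admin API follows; it assumes a Connection pointed at this mini-cluster and is not the test's actual source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateQuotaNamespace {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace 'np1' limited to 5 regions and 2 tables, matching the log entry above.
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build();
      admin.createNamespace(np1); // drives the CreateNamespaceProcedure (pid=14)
    }
  }
}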
2023-07-14 17:14:02,098 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1a1eaf2f0a815c2ecbe2d392d2aa9600, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:02,098 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689354842098"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354842098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354842098"}]},"ts":"1689354842098"} 2023-07-14 17:14:02,100 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 1a1eaf2f0a815c2ecbe2d392d2aa9600, server=jenkins-hbase20.apache.org,40287,1689354840061}] 2023-07-14 17:14:02,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-14 17:14:02,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:02,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-14 17:14:02,161 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:02,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-14 17:14:02,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:02,164 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:02,164 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:14:02,166 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:02,167 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,168 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 empty. 
2023-07-14 17:14:02,169 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-14 17:14:02,197 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:02,198 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 27fc267f178f79fcc89c3ab94c985754, NAME => 'np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp 2023-07-14 17:14:02,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:02,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 27fc267f178f79fcc89c3ab94c985754, disabling compactions & flushes 2023-07-14 17:14:02,215 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:02,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:02,215 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. after waiting 0 ms 2023-07-14 17:14:02,216 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:02,216 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:02,216 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 27fc267f178f79fcc89c3ab94c985754: 2023-07-14 17:14:02,218 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:02,219 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354842218"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354842218"}]},"ts":"1689354842218"} 2023-07-14 17:14:02,220 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-14 17:14:02,222 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:02,222 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354842222"}]},"ts":"1689354842222"} 2023-07-14 17:14:02,224 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-14 17:14:02,230 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:02,230 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:02,230 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:02,230 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:02,230 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:02,230 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, ASSIGN}] 2023-07-14 17:14:02,232 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, ASSIGN 2023-07-14 17:14:02,234 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37213,1689354840194; forceNewPlan=false, retain=false 2023-07-14 17:14:02,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 
2023-07-14 17:14:02,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a1eaf2f0a815c2ecbe2d392d2aa9600, NAME => 'hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:02,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:02,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:02,275 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,278 DEBUG [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/q 2023-07-14 17:14:02,278 DEBUG [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/q 2023-07-14 17:14:02,278 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a1eaf2f0a815c2ecbe2d392d2aa9600 columnFamilyName q 2023-07-14 17:14:02,280 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] regionserver.HStore(310): Store=1a1eaf2f0a815c2ecbe2d392d2aa9600/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:02,281 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,284 DEBUG [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/u 2023-07-14 17:14:02,284 DEBUG [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/u 2023-07-14 17:14:02,285 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a1eaf2f0a815c2ecbe2d392d2aa9600 columnFamilyName u 2023-07-14 17:14:02,286 INFO [StoreOpener-1a1eaf2f0a815c2ecbe2d392d2aa9600-1] regionserver.HStore(310): Store=1a1eaf2f0a815c2ecbe2d392d2aa9600/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:02,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,290 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-14 17:14:02,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:02,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:02,294 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1a1eaf2f0a815c2ecbe2d392d2aa9600; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9813598560, jitterRate=-0.08603741228580475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-14 17:14:02,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1a1eaf2f0a815c2ecbe2d392d2aa9600: 2023-07-14 17:14:02,296 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600., pid=15, masterSystemTime=1689354842251 2023-07-14 17:14:02,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:02,298 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:02,298 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1a1eaf2f0a815c2ecbe2d392d2aa9600, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:02,299 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689354842298"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354842298"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354842298"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354842298"}]},"ts":"1689354842298"} 2023-07-14 17:14:02,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-14 17:14:02,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 1a1eaf2f0a815c2ecbe2d392d2aa9600, server=jenkins-hbase20.apache.org,40287,1689354840061 in 200 msec 2023-07-14 17:14:02,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-14 17:14:02,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=1a1eaf2f0a815c2ecbe2d392d2aa9600, ASSIGN in 359 msec 2023-07-14 17:14:02,310 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:02,310 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354842310"}]},"ts":"1689354842310"} 2023-07-14 17:14:02,313 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-14 17:14:02,316 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:02,319 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 436 msec 2023-07-14 17:14:02,384 INFO [jenkins-hbase20:44713] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 17:14:02,386 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=27fc267f178f79fcc89c3ab94c985754, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:02,386 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354842386"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354842386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354842386"}]},"ts":"1689354842386"} 2023-07-14 17:14:02,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 27fc267f178f79fcc89c3ab94c985754, server=jenkins-hbase20.apache.org,37213,1689354840194}] 2023-07-14 17:14:02,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:02,544 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:02,544 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:14:02,545 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60326, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:14:02,552 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 
2023-07-14 17:14:02,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 27fc267f178f79fcc89c3ab94c985754, NAME => 'np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:02,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:02,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,560 INFO [StoreOpener-27fc267f178f79fcc89c3ab94c985754-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,562 DEBUG [StoreOpener-27fc267f178f79fcc89c3ab94c985754-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/fam1 2023-07-14 17:14:02,562 DEBUG [StoreOpener-27fc267f178f79fcc89c3ab94c985754-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/fam1 2023-07-14 17:14:02,563 INFO [StoreOpener-27fc267f178f79fcc89c3ab94c985754-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 27fc267f178f79fcc89c3ab94c985754 columnFamilyName fam1 2023-07-14 17:14:02,563 INFO [StoreOpener-27fc267f178f79fcc89c3ab94c985754-1] regionserver.HStore(310): Store=27fc267f178f79fcc89c3ab94c985754/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:02,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:02,570 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:02,571 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 27fc267f178f79fcc89c3ab94c985754; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11707352160, jitterRate=0.09033213555812836}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:02,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 27fc267f178f79fcc89c3ab94c985754: 2023-07-14 17:14:02,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754., pid=18, masterSystemTime=1689354842544 2023-07-14 17:14:02,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:02,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 
2023-07-14 17:14:02,580 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=27fc267f178f79fcc89c3ab94c985754, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:02,580 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354842580"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354842580"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354842580"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354842580"}]},"ts":"1689354842580"} 2023-07-14 17:14:02,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-14 17:14:02,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 27fc267f178f79fcc89c3ab94c985754, server=jenkins-hbase20.apache.org,37213,1689354840194 in 192 msec 2023-07-14 17:14:02,587 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-14 17:14:02,587 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, ASSIGN in 355 msec 2023-07-14 17:14:02,587 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:02,587 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354842587"}]},"ts":"1689354842587"} 2023-07-14 17:14:02,588 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-14 17:14:02,591 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:02,593 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 436 msec 2023-07-14 17:14:02,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:02,770 INFO [Listener at localhost.localdomain/39045] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-14 17:14:02,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:02,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-14 17:14:02,774 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:02,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-14 17:14:02,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 17:14:02,804 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=30 msec 2023-07-14 17:14:02,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 17:14:02,880 INFO [Listener at localhost.localdomain/39045] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-14 17:14:02,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:02,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:02,885 INFO [Listener at localhost.localdomain/39045] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-14 17:14:02,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable np1:table1 2023-07-14 17:14:02,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-14 17:14:02,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 17:14:02,897 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354842897"}]},"ts":"1689354842897"} 2023-07-14 17:14:02,900 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-14 17:14:02,901 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-14 17:14:02,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, UNASSIGN}] 2023-07-14 17:14:02,904 INFO 
[PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, UNASSIGN 2023-07-14 17:14:02,905 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=27fc267f178f79fcc89c3ab94c985754, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:02,906 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354842905"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354842905"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354842905"}]},"ts":"1689354842905"} 2023-07-14 17:14:02,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 27fc267f178f79fcc89c3ab94c985754, server=jenkins-hbase20.apache.org,37213,1689354840194}] 2023-07-14 17:14:02,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 17:14:03,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:03,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 27fc267f178f79fcc89c3ab94c985754, disabling compactions & flushes 2023-07-14 17:14:03,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:03,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:03,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. after waiting 0 ms 2023-07-14 17:14:03,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 2023-07-14 17:14:03,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:14:03,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754. 
2023-07-14 17:14:03,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 27fc267f178f79fcc89c3ab94c985754: 2023-07-14 17:14:03,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:03,070 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=27fc267f178f79fcc89c3ab94c985754, regionState=CLOSED 2023-07-14 17:14:03,070 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354843070"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354843070"}]},"ts":"1689354843070"} 2023-07-14 17:14:03,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-14 17:14:03,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 27fc267f178f79fcc89c3ab94c985754, server=jenkins-hbase20.apache.org,37213,1689354840194 in 163 msec 2023-07-14 17:14:03,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-14 17:14:03,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=27fc267f178f79fcc89c3ab94c985754, UNASSIGN in 171 msec 2023-07-14 17:14:03,079 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354843079"}]},"ts":"1689354843079"} 2023-07-14 17:14:03,081 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-14 17:14:03,082 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-14 17:14:03,085 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 199 msec 2023-07-14 17:14:03,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 17:14:03,196 INFO [Listener at localhost.localdomain/39045] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-14 17:14:03,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete np1:table1 2023-07-14 17:14:03,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-14 17:14:03,200 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 17:14:03,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-14 17:14:03,201 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 
17:14:03,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:03,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:14:03,204 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:03,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-14 17:14:03,209 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/fam1, FileablePath, hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/recovered.edits] 2023-07-14 17:14:03,215 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/recovered.edits/4.seqid to hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/archive/data/np1/table1/27fc267f178f79fcc89c3ab94c985754/recovered.edits/4.seqid 2023-07-14 17:14:03,215 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/.tmp/data/np1/table1/27fc267f178f79fcc89c3ab94c985754 2023-07-14 17:14:03,215 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-14 17:14:03,218 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 17:14:03,219 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-14 17:14:03,221 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-14 17:14:03,224 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 17:14:03,224 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-14 17:14:03,224 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354843224"}]},"ts":"9223372036854775807"} 2023-07-14 17:14:03,226 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 17:14:03,226 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 27fc267f178f79fcc89c3ab94c985754, NAME => 'np1:table1,,1689354842155.27fc267f178f79fcc89c3ab94c985754.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 17:14:03,226 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
2023-07-14 17:14:03,226 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354843226"}]},"ts":"9223372036854775807"} 2023-07-14 17:14:03,228 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-14 17:14:03,241 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 17:14:03,243 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 44 msec 2023-07-14 17:14:03,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-14 17:14:03,309 INFO [Listener at localhost.localdomain/39045] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-14 17:14:03,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete np1 2023-07-14 17:14:03,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44713] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,328 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,333 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,338 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-14 17:14:03,339 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-14 17:14:03,339 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:03,340 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,343 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 17:14:03,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 27 msec 2023-07-14 17:14:03,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44713] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-14 17:14:03,440 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(1286): Shutting down minicluster 
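Before the minicluster shutdown below, the entries from pid=16 through pid=24 trace one admin-level sequence: np1:table1 is created, an attempt to create np1:table2 with six regions is rolled back by the namespace quota, and table1 and the namespace are then dropped. A condensed Java sketch of that sequence follows, using the standard Admin API; it is an assumed illustration rather than the test's own code, and the quota rejection is shown surfacing as an IOException (a QuotaExceededException in the log).

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceQuotaWalkthrough {
  static void run(Admin admin) throws IOException {
    TableName t1 = TableName.valueOf("np1:table1");
    admin.createTable(TableDescriptorBuilder.newBuilder(t1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build());

    // Five split keys => six regions for table2, more than the namespace's
    // maxregions=5 allows, so the create is rejected (pid=19 ROLLEDBACK above).
    TableName t2 = TableName.valueOf("np1:table2");
    byte[][] splits = { Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
                        Bytes.toBytes("4"), Bytes.toBytes("5") };
    try {
      admin.createTable(TableDescriptorBuilder.newBuilder(t2)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build(), splits);
    } catch (IOException quotaExceeded) {
      // Expected: the namespace quota check fails the create-table procedure.
    }

    admin.disableTable(t1);        // DisableTableProcedure (pid=20)
    admin.deleteTable(t1);         // DeleteTableProcedure  (pid=23)
    admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24)
  }
}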
2023-07-14 17:14:03,440 INFO [Listener at localhost.localdomain/39045] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 17:14:03,440 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x615cbeb2 to 127.0.0.1:56537 2023-07-14 17:14:03,440 DEBUG [Listener at localhost.localdomain/39045] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,440 DEBUG [Listener at localhost.localdomain/39045] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 17:14:03,440 DEBUG [Listener at localhost.localdomain/39045] util.JVMClusterUtil(257): Found active master hash=1085030597, stopped=false 2023-07-14 17:14:03,440 DEBUG [Listener at localhost.localdomain/39045] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-14 17:14:03,441 INFO [Listener at localhost.localdomain/39045] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:03,441 INFO [Listener at localhost.localdomain/39045] procedure2.ProcedureExecutor(629): Stopping 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:03,441 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:03,443 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:03,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:03,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:03,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-07-14 17:14:03,450 DEBUG [Listener at localhost.localdomain/39045] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1d35ecfe to 127.0.0.1:56537 2023-07-14 17:14:03,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:03,451 DEBUG [Listener at localhost.localdomain/39045] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,451 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1064): Closing user regions 2023-07-14 17:14:03,451 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44435,1689354839977' ***** 2023-07-14 17:14:03,451 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:03,451 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,40287,1689354840061' ***** 2023-07-14 17:14:03,451 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:03,451 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(3305): Received CLOSE for 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:03,451 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:03,451 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,37213,1689354840194' ***** 2023-07-14 17:14:03,452 INFO [Listener at localhost.localdomain/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:03,452 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:03,461 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:03,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 17de33c40105a51666cd874cf8eea882, disabling compactions & flushes 2023-07-14 17:14:03,470 INFO [RS:0;jenkins-hbase20:44435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@700c56c4{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:03,470 INFO [RS:1;jenkins-hbase20:40287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1823abf3{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:03,471 INFO [RS:0;jenkins-hbase20:44435] server.AbstractConnector(383): Stopped ServerConnector@13c688b1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:03,471 INFO [RS:1;jenkins-hbase20:40287] server.AbstractConnector(383): Stopped ServerConnector@3f25bf96{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:03,471 INFO [RS:0;jenkins-hbase20:44435] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:03,471 INFO [RS:1;jenkins-hbase20:40287] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:03,474 INFO [RS:1;jenkins-hbase20:40287] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@465b158d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:03,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:03,474 INFO [RS:1;jenkins-hbase20:40287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@77ce8649{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:03,474 INFO [RS:0;jenkins-hbase20:44435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4f4c4036{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:03,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:03,475 INFO [RS:0;jenkins-hbase20:44435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2514afd1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:03,475 INFO [RS:2;jenkins-hbase20:37213] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@467aab1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:03,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. after waiting 0 ms 2023-07-14 17:14:03,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:03,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 17de33c40105a51666cd874cf8eea882 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-14 17:14:03,476 INFO [RS:0;jenkins-hbase20:44435] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:03,476 INFO [RS:0;jenkins-hbase20:44435] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:03,476 INFO [RS:0;jenkins-hbase20:44435] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:14:03,476 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:03,476 DEBUG [RS:0;jenkins-hbase20:44435] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6599bd37 to 127.0.0.1:56537 2023-07-14 17:14:03,476 DEBUG [RS:0;jenkins-hbase20:44435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,479 INFO [RS:0;jenkins-hbase20:44435] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:03,479 INFO [RS:0;jenkins-hbase20:44435] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-14 17:14:03,479 INFO [RS:0;jenkins-hbase20:44435] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:03,479 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 17:14:03,479 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:03,480 INFO [RS:2;jenkins-hbase20:37213] server.AbstractConnector(383): Stopped ServerConnector@3a4603bc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:03,480 INFO [RS:2;jenkins-hbase20:37213] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:03,489 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-14 17:14:03,490 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1478): Online Regions={17de33c40105a51666cd874cf8eea882=hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882., 1588230740=hbase:meta,,1.1588230740} 2023-07-14 17:14:03,490 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1504): Waiting on 1588230740, 17de33c40105a51666cd874cf8eea882 2023-07-14 17:14:03,490 INFO [RS:1;jenkins-hbase20:40287] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:03,491 INFO [RS:2;jenkins-hbase20:37213] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2016266d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:03,492 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:14:03,491 INFO [RS:1;jenkins-hbase20:40287] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:03,491 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:03,492 INFO [RS:1;jenkins-hbase20:40287] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-14 17:14:03,492 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:14:03,493 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(3305): Received CLOSE for 00874062ea90f3cb7a7f2fb7ef938534 2023-07-14 17:14:03,492 INFO [RS:2;jenkins-hbase20:37213] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1de19e3f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:03,493 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:14:03,493 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:14:03,493 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:14:03,493 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-14 17:14:03,494 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(3305): Received CLOSE for 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:03,495 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:03,495 DEBUG [RS:1;jenkins-hbase20:40287] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x316f5206 to 127.0.0.1:56537 2023-07-14 17:14:03,495 DEBUG [RS:1;jenkins-hbase20:40287] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,495 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-14 17:14:03,495 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1478): Online Regions={00874062ea90f3cb7a7f2fb7ef938534=hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534., 1a1eaf2f0a815c2ecbe2d392d2aa9600=hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600.} 2023-07-14 17:14:03,495 DEBUG [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1504): Waiting on 00874062ea90f3cb7a7f2fb7ef938534, 1a1eaf2f0a815c2ecbe2d392d2aa9600 2023-07-14 17:14:03,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 00874062ea90f3cb7a7f2fb7ef938534, disabling compactions & flushes 2023-07-14 17:14:03,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:03,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:03,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. after waiting 0 ms 2023-07-14 17:14:03,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 
2023-07-14 17:14:03,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 00874062ea90f3cb7a7f2fb7ef938534 1/1 column families, dataSize=642 B heapSize=1.10 KB 2023-07-14 17:14:03,499 INFO [RS:2;jenkins-hbase20:37213] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:03,500 INFO [RS:2;jenkins-hbase20:37213] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:03,500 INFO [RS:2;jenkins-hbase20:37213] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:14:03,500 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:03,500 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:03,501 DEBUG [RS:2;jenkins-hbase20:37213] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43a275c2 to 127.0.0.1:56537 2023-07-14 17:14:03,501 DEBUG [RS:2;jenkins-hbase20:37213] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,501 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37213,1689354840194; all regions closed. 2023-07-14 17:14:03,502 DEBUG [RS:2;jenkins-hbase20:37213] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-14 17:14:03,535 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-14 17:14:03,535 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-14 17:14:03,538 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:03,541 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:03,545 DEBUG [RS:2;jenkins-hbase20:37213] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs 2023-07-14 17:14:03,545 INFO [RS:2;jenkins-hbase20:37213] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C37213%2C1689354840194:(num 1689354840670) 2023-07-14 17:14:03,545 DEBUG [RS:2;jenkins-hbase20:37213] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,545 INFO [RS:2;jenkins-hbase20:37213] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:03,551 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:03,553 INFO [RS:2;jenkins-hbase20:37213] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:03,553 INFO [RS:2;jenkins-hbase20:37213] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:03,553 INFO [RS:2;jenkins-hbase20:37213] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:03,553 INFO [RS:2;jenkins-hbase20:37213] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:03,554 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:14:03,554 INFO [RS:2;jenkins-hbase20:37213] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37213 2023-07-14 17:14:03,559 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:03,559 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:03,559 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:03,559 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:03,559 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:03,560 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37213,1689354840194 2023-07-14 17:14:03,560 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:03,560 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,37213,1689354840194] 2023-07-14 17:14:03,560 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,37213,1689354840194; numProcessing=1 2023-07-14 17:14:03,561 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,37213,1689354840194 already deleted, retry=false 2023-07-14 17:14:03,561 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,37213,1689354840194 expired; onlineServers=2 2023-07-14 17:14:03,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=642 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/.tmp/m/001fdef1df2e49eb87b13bd3e3e840b4 2023-07-14 17:14:03,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), 
to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/.tmp/info/ee864dc5f27a4427b947bfecdb16d6b8 2023-07-14 17:14:03,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee864dc5f27a4427b947bfecdb16d6b8 2023-07-14 17:14:03,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/.tmp/m/001fdef1df2e49eb87b13bd3e3e840b4 as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/m/001fdef1df2e49eb87b13bd3e3e840b4 2023-07-14 17:14:03,583 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/.tmp/info/ee864dc5f27a4427b947bfecdb16d6b8 as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/info/ee864dc5f27a4427b947bfecdb16d6b8 2023-07-14 17:14:03,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee864dc5f27a4427b947bfecdb16d6b8 2023-07-14 17:14:03,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/info/ee864dc5f27a4427b947bfecdb16d6b8, entries=3, sequenceid=8, filesize=5.0 K 2023-07-14 17:14:03,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/m/001fdef1df2e49eb87b13bd3e3e840b4, entries=1, sequenceid=7, filesize=4.9 K 2023-07-14 17:14:03,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 17de33c40105a51666cd874cf8eea882 in 120ms, sequenceid=8, compaction requested=false 2023-07-14 17:14:03,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~642 B/642, heapSize ~1.09 KB/1112, currentSize=0 B/0 for 00874062ea90f3cb7a7f2fb7ef938534 in 98ms, sequenceid=7, compaction requested=false 2023-07-14 17:14:03,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-14 17:14:03,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-14 17:14:03,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/rsgroup/00874062ea90f3cb7a7f2fb7ef938534/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-14 
17:14:03,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/namespace/17de33c40105a51666cd874cf8eea882/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:14:03,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:03,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 00874062ea90f3cb7a7f2fb7ef938534: 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689354841033.00874062ea90f3cb7a7f2fb7ef938534. 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1a1eaf2f0a815c2ecbe2d392d2aa9600, disabling compactions & flushes 2023-07-14 17:14:03,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:03,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 17de33c40105a51666cd874cf8eea882: 2023-07-14 17:14:03,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. after waiting 0 ms 2023-07-14 17:14:03,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:03,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689354840940.17de33c40105a51666cd874cf8eea882. 2023-07-14 17:14:03,622 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-14 17:14:03,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/quota/1a1eaf2f0a815c2ecbe2d392d2aa9600/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:14:03,622 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-14 17:14:03,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 
2023-07-14 17:14:03,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1a1eaf2f0a815c2ecbe2d392d2aa9600: 2023-07-14 17:14:03,623 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689354841879.1a1eaf2f0a815c2ecbe2d392d2aa9600. 2023-07-14 17:14:03,691 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-14 17:14:03,695 INFO [RS:1;jenkins-hbase20:40287] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40287,1689354840061; all regions closed. 2023-07-14 17:14:03,695 DEBUG [RS:1;jenkins-hbase20:40287] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-14 17:14:03,708 DEBUG [RS:1;jenkins-hbase20:40287] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs 2023-07-14 17:14:03,708 INFO [RS:1;jenkins-hbase20:40287] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C40287%2C1689354840061:(num 1689354840678) 2023-07-14 17:14:03,708 DEBUG [RS:1;jenkins-hbase20:40287] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:03,708 INFO [RS:1;jenkins-hbase20:40287] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:03,708 INFO [RS:1;jenkins-hbase20:40287] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:03,709 INFO [RS:1;jenkins-hbase20:40287] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:03,709 INFO [RS:1;jenkins-hbase20:40287] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:03,709 INFO [RS:1;jenkins-hbase20:40287] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:03,709 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:14:03,710 INFO [RS:1;jenkins-hbase20:40287] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40287 2023-07-14 17:14:03,714 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:03,714 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:03,714 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40287,1689354840061 2023-07-14 17:14:03,715 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,40287,1689354840061] 2023-07-14 17:14:03,715 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,40287,1689354840061; numProcessing=2 2023-07-14 17:14:03,715 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,40287,1689354840061 already deleted, retry=false 2023-07-14 17:14:03,716 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,40287,1689354840061 expired; onlineServers=1 2023-07-14 17:14:03,891 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-14 17:14:03,969 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/info/c235a975bb4c4253b7a64d3a7ea502ec 2023-07-14 17:14:03,981 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c235a975bb4c4253b7a64d3a7ea502ec 2023-07-14 17:14:04,016 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/rep_barrier/4e2dbc31445b445a89ed15fc65163231 2023-07-14 17:14:04,023 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4e2dbc31445b445a89ed15fc65163231 2023-07-14 17:14:04,043 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,043 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:40287-0x1008c79a3240002, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,043 INFO [RS:1;jenkins-hbase20:40287] 
regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40287,1689354840061; zookeeper connection closed. 2023-07-14 17:14:04,046 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5ffeed7f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5ffeed7f 2023-07-14 17:14:04,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/table/0f03d287e93946aa92df56dd41be185c 2023-07-14 17:14:04,082 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f03d287e93946aa92df56dd41be185c 2023-07-14 17:14:04,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/info/c235a975bb4c4253b7a64d3a7ea502ec as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/info/c235a975bb4c4253b7a64d3a7ea502ec 2023-07-14 17:14:04,091 DEBUG [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-14 17:14:04,092 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c235a975bb4c4253b7a64d3a7ea502ec 2023-07-14 17:14:04,092 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/info/c235a975bb4c4253b7a64d3a7ea502ec, entries=32, sequenceid=31, filesize=8.5 K 2023-07-14 17:14:04,093 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/rep_barrier/4e2dbc31445b445a89ed15fc65163231 as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/rep_barrier/4e2dbc31445b445a89ed15fc65163231 2023-07-14 17:14:04,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4e2dbc31445b445a89ed15fc65163231 2023-07-14 17:14:04,126 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/rep_barrier/4e2dbc31445b445a89ed15fc65163231, entries=1, sequenceid=31, filesize=4.9 K 2023-07-14 17:14:04,135 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/.tmp/table/0f03d287e93946aa92df56dd41be185c as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/table/0f03d287e93946aa92df56dd41be185c 2023-07-14 17:14:04,143 INFO [RS:2;jenkins-hbase20:37213] regionserver.HRegionServer(1227): Exiting; 
stopping=jenkins-hbase20.apache.org,37213,1689354840194; zookeeper connection closed. 2023-07-14 17:14:04,143 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,143 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:37213-0x1008c79a3240003, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,159 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@35aa4df0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@35aa4df0 2023-07-14 17:14:04,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f03d287e93946aa92df56dd41be185c 2023-07-14 17:14:04,167 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/table/0f03d287e93946aa92df56dd41be185c, entries=8, sequenceid=31, filesize=5.2 K 2023-07-14 17:14:04,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 675ms, sequenceid=31, compaction requested=false 2023-07-14 17:14:04,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 17:14:04,203 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-14 17:14:04,203 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:14:04,203 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:04,203 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:14:04,204 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:04,291 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44435,1689354839977; all regions closed. 2023-07-14 17:14:04,291 DEBUG [RS:0;jenkins-hbase20:44435] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-14 17:14:04,310 DEBUG [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs 2023-07-14 17:14:04,311 INFO [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44435%2C1689354839977.meta:.meta(num 1689354840875) 2023-07-14 17:14:04,319 DEBUG [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/oldWALs 2023-07-14 17:14:04,319 INFO [RS:0;jenkins-hbase20:44435] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44435%2C1689354839977:(num 1689354840670) 2023-07-14 17:14:04,319 DEBUG [RS:0;jenkins-hbase20:44435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:04,319 INFO [RS:0;jenkins-hbase20:44435] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:04,320 INFO [RS:0;jenkins-hbase20:44435] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:04,320 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:14:04,321 INFO [RS:0;jenkins-hbase20:44435] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44435 2023-07-14 17:14:04,325 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44435,1689354839977 2023-07-14 17:14:04,325 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:04,326 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44435,1689354839977] 2023-07-14 17:14:04,326 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44435,1689354839977; numProcessing=3 2023-07-14 17:14:04,328 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44435,1689354839977 already deleted, retry=false 2023-07-14 17:14:04,328 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44435,1689354839977 expired; onlineServers=0 2023-07-14 17:14:04,328 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44713,1689354839774' ***** 2023-07-14 17:14:04,328 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 17:14:04,329 DEBUG [M:0;jenkins-hbase20:44713] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4493dc08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:04,329 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegionServer(1109): Stopping 
infoServer 2023-07-14 17:14:04,333 INFO [M:0;jenkins-hbase20:44713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@125e4d20{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:14:04,334 INFO [M:0;jenkins-hbase20:44713] server.AbstractConnector(383): Stopped ServerConnector@6a4dfbdb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:04,334 INFO [M:0;jenkins-hbase20:44713] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:04,334 INFO [M:0;jenkins-hbase20:44713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d0b0b53{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:04,334 INFO [M:0;jenkins-hbase20:44713] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ac5238b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:04,335 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44713,1689354839774 2023-07-14 17:14:04,335 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44713,1689354839774; all regions closed. 2023-07-14 17:14:04,335 DEBUG [M:0;jenkins-hbase20:44713] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:04,335 INFO [M:0;jenkins-hbase20:44713] master.HMaster(1491): Stopping master jetty server 2023-07-14 17:14:04,335 INFO [M:0;jenkins-hbase20:44713] server.AbstractConnector(383): Stopped ServerConnector@608691bf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:04,336 DEBUG [M:0;jenkins-hbase20:44713] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 17:14:04,336 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-14 17:14:04,336 DEBUG [M:0;jenkins-hbase20:44713] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 17:14:04,336 INFO [M:0;jenkins-hbase20:44713] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 17:14:04,336 INFO [M:0;jenkins-hbase20:44713] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-14 17:14:04,336 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354840440] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354840440,5,FailOnTimeoutGroup] 2023-07-14 17:14:04,336 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354840439] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354840439,5,FailOnTimeoutGroup] 2023-07-14 17:14:04,338 INFO [M:0;jenkins-hbase20:44713] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:04,338 DEBUG [M:0;jenkins-hbase20:44713] master.HMaster(1512): Stopping service threads 2023-07-14 17:14:04,338 INFO [M:0;jenkins-hbase20:44713] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 17:14:04,338 ERROR [M:0;jenkins-hbase20:44713] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-14 17:14:04,339 INFO [M:0;jenkins-hbase20:44713] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 17:14:04,339 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-14 17:14:04,427 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,427 INFO [RS:0;jenkins-hbase20:44435] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44435,1689354839977; zookeeper connection closed. 
2023-07-14 17:14:04,427 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44435-0x1008c79a3240001, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,427 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1601764f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1601764f 2023-07-14 17:14:04,427 INFO [Listener at localhost.localdomain/39045] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-14 17:14:04,428 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:04,428 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:04,428 INFO [M:0;jenkins-hbase20:44713] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 17:14:04,429 INFO [M:0;jenkins-hbase20:44713] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 17:14:04,429 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:14:04,429 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:04,429 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:04,429 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:14:04,429 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-14 17:14:04,430 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.05 KB heapSize=109.20 KB 2023-07-14 17:14:04,430 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-14 17:14:04,430 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:04,430 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-14 17:14:04,461 INFO [M:0;jenkins-hbase20:44713] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.05 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1a23358cabcf46b498ff901dac489174 2023-07-14 17:14:04,467 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1a23358cabcf46b498ff901dac489174 as hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1a23358cabcf46b498ff901dac489174 2023-07-14 17:14:04,475 INFO [M:0;jenkins-hbase20:44713] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38043/user/jenkins/test-data/4d9d0011-0cfe-4148-f925-46a98dd7eb65/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1a23358cabcf46b498ff901dac489174, entries=24, sequenceid=194, filesize=12.4 K 2023-07-14 17:14:04,476 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegion(2948): Finished flush of dataSize ~93.05 KB/95284, heapSize ~109.19 KB/111808, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 47ms, sequenceid=194, compaction requested=false 2023-07-14 17:14:04,478 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:04,478 DEBUG [M:0;jenkins-hbase20:44713] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:04,482 INFO [M:0;jenkins-hbase20:44713] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 17:14:04,482 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:14:04,483 INFO [M:0;jenkins-hbase20:44713] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44713 2023-07-14 17:14:04,484 DEBUG [M:0;jenkins-hbase20:44713] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44713,1689354839774 already deleted, retry=false 2023-07-14 17:14:04,585 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,585 INFO [M:0;jenkins-hbase20:44713] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44713,1689354839774; zookeeper connection closed. 
2023-07-14 17:14:04,585 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): master:44713-0x1008c79a3240000, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:04,586 WARN [Listener at localhost.localdomain/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:04,592 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:04,696 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:04,696 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1553773554-148.251.75.209-1689354838762 (Datanode Uuid 8b15ed60-a34d-4cac-a2f4-411ba8dbf8b4) service to localhost.localdomain/127.0.0.1:38043 2023-07-14 17:14:04,698 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data5/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:04,698 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data6/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:04,702 WARN [Listener at localhost.localdomain/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:04,750 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:04,765 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:04,765 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1553773554-148.251.75.209-1689354838762 (Datanode Uuid febb32fe-2bed-4bfa-a5a3-3c97b1c3d6b5) service to localhost.localdomain/127.0.0.1:38043 2023-07-14 17:14:04,767 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data4/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:04,767 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data3/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to 
refresh disk information: sleep interrupted 2023-07-14 17:14:04,769 WARN [Listener at localhost.localdomain/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:04,794 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:04,897 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:04,897 WARN [BP-1553773554-148.251.75.209-1689354838762 heartbeating to localhost.localdomain/127.0.0.1:38043] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1553773554-148.251.75.209-1689354838762 (Datanode Uuid 01cf9cdd-9a96-4337-a484-1414d8b27402) service to localhost.localdomain/127.0.0.1:38043 2023-07-14 17:14:04,898 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data1/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:04,898 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/cluster_ff025c63-2f88-cccf-3ed2-27add37df3a6/dfs/data/data2/current/BP-1553773554-148.251.75.209-1689354838762] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:04,912 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-14 17:14:05,027 INFO [Listener at localhost.localdomain/39045] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-14 17:14:05,063 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-14 17:14:05,063 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-14 17:14:05,063 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.log.dir so I do NOT create it in target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/91794d29-25cb-1699-5e7a-d6c0c0735cf7/hadoop.tmp.dir so I do NOT create it in target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a, deleteOnExit=true 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/test.cache.data in system properties and HBase conf 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.tmp.dir in system properties and HBase conf 2023-07-14 17:14:05,064 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir in system properties and HBase conf 2023-07-14 17:14:05,065 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-14 17:14:05,065 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-14 17:14:05,065 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-14 17:14:05,065 DEBUG [Listener at localhost.localdomain/39045] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-14 17:14:05,065 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:14:05,065 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 17:14:05,066 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 17:14:05,066 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:14:05,066 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 17:14:05,066 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/nfs.dump.dir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/java.io.tmpdir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 17:14:05,067 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 17:14:05,068 INFO [Listener at localhost.localdomain/39045] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 17:14:05,071 WARN [Listener at localhost.localdomain/39045] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:14:05,071 WARN [Listener at localhost.localdomain/39045] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:14:05,102 WARN [Listener at localhost.localdomain/39045] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:14:05,106 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:14:05,114 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/java.io.tmpdir/Jetty_localhost_localdomain_33073_hdfs____g0n91i/webapp 2023-07-14 17:14:05,126 DEBUG [Listener at localhost.localdomain/39045-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1008c79a324000a, quorum=127.0.0.1:56537, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-14 17:14:05,126 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1008c79a324000a, quorum=127.0.0.1:56537, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-14 17:14:05,204 INFO [Listener at localhost.localdomain/39045] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:33073 2023-07-14 17:14:05,206 WARN [Listener at localhost.localdomain/39045] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 17:14:05,206 WARN [Listener at localhost.localdomain/39045] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 17:14:05,232 WARN [Listener at localhost.localdomain/43505] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:14:05,246 WARN [Listener at localhost.localdomain/43505] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:14:05,248 WARN [Listener at localhost.localdomain/43505] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:14:05,249 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:14:05,257 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/java.io.tmpdir/Jetty_localhost_34929_datanode____.l2waxm/webapp 2023-07-14 17:14:05,330 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34929 2023-07-14 17:14:05,336 WARN [Listener at localhost.localdomain/41897] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:14:05,350 WARN [Listener at localhost.localdomain/41897] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:14:05,352 WARN [Listener at localhost.localdomain/41897] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:14:05,353 INFO [Listener at localhost.localdomain/41897] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:14:05,356 INFO [Listener at localhost.localdomain/41897] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/java.io.tmpdir/Jetty_localhost_33755_datanode____2ut34u/webapp 2023-07-14 17:14:05,398 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x74d99325bcba5dd2: Processing first storage report for DS-4215712d-f263-40b9-9298-85201cfd7727 from datanode 43422b35-d4d0-4ea5-82ab-05b567aa19a2 2023-07-14 17:14:05,399 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x74d99325bcba5dd2: from storage DS-4215712d-f263-40b9-9298-85201cfd7727 node DatanodeRegistration(127.0.0.1:45065, datanodeUuid=43422b35-d4d0-4ea5-82ab-05b567aa19a2, infoPort=45783, infoSecurePort=0, ipcPort=41897, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,399 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x74d99325bcba5dd2: Processing first storage report for DS-3e14f94e-fe8a-4ad7-a0e7-0598625eda94 from datanode 43422b35-d4d0-4ea5-82ab-05b567aa19a2 2023-07-14 17:14:05,399 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x74d99325bcba5dd2: from storage DS-3e14f94e-fe8a-4ad7-a0e7-0598625eda94 node 
DatanodeRegistration(127.0.0.1:45065, datanodeUuid=43422b35-d4d0-4ea5-82ab-05b567aa19a2, infoPort=45783, infoSecurePort=0, ipcPort=41897, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,441 INFO [Listener at localhost.localdomain/41897] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33755 2023-07-14 17:14:05,447 WARN [Listener at localhost.localdomain/40387] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:14:05,461 WARN [Listener at localhost.localdomain/40387] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 17:14:05,463 WARN [Listener at localhost.localdomain/40387] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 17:14:05,464 INFO [Listener at localhost.localdomain/40387] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 17:14:05,470 INFO [Listener at localhost.localdomain/40387] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/java.io.tmpdir/Jetty_localhost_39817_datanode____v893z/webapp 2023-07-14 17:14:05,514 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaff8bc655fe8bfe8: Processing first storage report for DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d from datanode 045f435a-ce6b-4a2e-88dc-9f4c7a5ef1d0 2023-07-14 17:14:05,514 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaff8bc655fe8bfe8: from storage DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d node DatanodeRegistration(127.0.0.1:45875, datanodeUuid=045f435a-ce6b-4a2e-88dc-9f4c7a5ef1d0, infoPort=42555, infoSecurePort=0, ipcPort=40387, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,514 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaff8bc655fe8bfe8: Processing first storage report for DS-50cb83fb-8a58-4325-9209-c34a29591a0c from datanode 045f435a-ce6b-4a2e-88dc-9f4c7a5ef1d0 2023-07-14 17:14:05,514 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaff8bc655fe8bfe8: from storage DS-50cb83fb-8a58-4325-9209-c34a29591a0c node DatanodeRegistration(127.0.0.1:45875, datanodeUuid=045f435a-ce6b-4a2e-88dc-9f4c7a5ef1d0, infoPort=42555, infoSecurePort=0, ipcPort=40387, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,549 INFO [Listener at localhost.localdomain/40387] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39817 2023-07-14 17:14:05,556 WARN [Listener at localhost.localdomain/41959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 17:14:05,627 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8d92d8847dfcce5c: Processing first storage report for DS-36b58b11-1385-4fc8-b76b-4adb69764848 from datanode 
9e90f798-876f-4c34-a2a3-0867cd04ad27 2023-07-14 17:14:05,627 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8d92d8847dfcce5c: from storage DS-36b58b11-1385-4fc8-b76b-4adb69764848 node DatanodeRegistration(127.0.0.1:43591, datanodeUuid=9e90f798-876f-4c34-a2a3-0867cd04ad27, infoPort=33913, infoSecurePort=0, ipcPort=41959, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,627 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8d92d8847dfcce5c: Processing first storage report for DS-2fee040b-c5dd-46ab-a9ef-8cb21bcfa73f from datanode 9e90f798-876f-4c34-a2a3-0867cd04ad27 2023-07-14 17:14:05,627 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8d92d8847dfcce5c: from storage DS-2fee040b-c5dd-46ab-a9ef-8cb21bcfa73f node DatanodeRegistration(127.0.0.1:43591, datanodeUuid=9e90f798-876f-4c34-a2a3-0867cd04ad27, infoPort=33913, infoSecurePort=0, ipcPort=41959, storageInfo=lv=-57;cid=testClusterID;nsid=791696055;c=1689354845074), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 17:14:05,664 DEBUG [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e 2023-07-14 17:14:05,668 INFO [Listener at localhost.localdomain/41959] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/zookeeper_0, clientPort=53758, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 17:14:05,670 INFO [Listener at localhost.localdomain/41959] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53758 2023-07-14 17:14:05,670 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,671 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,687 INFO [Listener at localhost.localdomain/41959] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 with version=8 2023-07-14 17:14:05,687 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost.localdomain:37685/user/jenkins/test-data/6863dac1-2d24-cc4e-3b03-f980de3dc12a/hbase-staging 2023-07-14 17:14:05,688 DEBUG [Listener at localhost.localdomain/41959] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 17:14:05,688 DEBUG [Listener at localhost.localdomain/41959] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 17:14:05,688 DEBUG [Listener at localhost.localdomain/41959] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 17:14:05,688 DEBUG [Listener at localhost.localdomain/41959] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:05,689 INFO [Listener at localhost.localdomain/41959] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:05,691 INFO [Listener at localhost.localdomain/41959] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39335 2023-07-14 17:14:05,691 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,692 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,693 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39335 connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:05,698 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:393350x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:05,699 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39335-0x1008c79ba620000 connected 2023-07-14 17:14:05,709 DEBUG [Listener at 
localhost.localdomain/41959] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:05,709 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:05,710 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:05,711 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39335 2023-07-14 17:14:05,712 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39335 2023-07-14 17:14:05,712 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39335 2023-07-14 17:14:05,713 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39335 2023-07-14 17:14:05,713 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39335 2023-07-14 17:14:05,716 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:05,716 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:05,716 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:05,717 INFO [Listener at localhost.localdomain/41959] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 17:14:05,717 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:05,717 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:05,717 INFO [Listener at localhost.localdomain/41959] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 17:14:05,718 INFO [Listener at localhost.localdomain/41959] http.HttpServer(1146): Jetty bound to port 43807 2023-07-14 17:14:05,718 INFO [Listener at localhost.localdomain/41959] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:05,719 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,719 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6103c650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:05,719 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,719 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2bcb7f4e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:05,724 INFO [Listener at localhost.localdomain/41959] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:05,725 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:05,725 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:05,726 INFO [Listener at localhost.localdomain/41959] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:05,726 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,727 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@372bd8f5{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:14:05,728 INFO [Listener at localhost.localdomain/41959] server.AbstractConnector(333): Started ServerConnector@392abe7f{HTTP/1.1, (http/1.1)}{0.0.0.0:43807} 2023-07-14 17:14:05,728 INFO [Listener at localhost.localdomain/41959] server.Server(415): Started @44378ms 2023-07-14 17:14:05,729 INFO [Listener at localhost.localdomain/41959] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644, hbase.cluster.distributed=false 2023-07-14 17:14:05,740 INFO [Listener at localhost.localdomain/41959] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:05,740 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,740 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,740 INFO [Listener at localhost.localdomain/41959] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:14:05,741 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,741 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:05,741 INFO [Listener at localhost.localdomain/41959] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:05,741 INFO [Listener at localhost.localdomain/41959] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45741 2023-07-14 17:14:05,742 INFO [Listener at localhost.localdomain/41959] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:05,743 DEBUG [Listener at localhost.localdomain/41959] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:05,743 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,744 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,745 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45741 connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:05,749 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:457410x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:05,750 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:457410x0, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:05,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45741-0x1008c79ba620001 connected 2023-07-14 17:14:05,752 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:05,752 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:05,753 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45741 2023-07-14 17:14:05,753 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45741 2023-07-14 17:14:05,753 DEBUG [Listener at 
localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45741 2023-07-14 17:14:05,754 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45741 2023-07-14 17:14:05,754 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45741 2023-07-14 17:14:05,756 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:05,757 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:05,757 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:05,757 INFO [Listener at localhost.localdomain/41959] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:05,757 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:05,758 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:05,758 INFO [Listener at localhost.localdomain/41959] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 17:14:05,758 INFO [Listener at localhost.localdomain/41959] http.HttpServer(1146): Jetty bound to port 34805 2023-07-14 17:14:05,758 INFO [Listener at localhost.localdomain/41959] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:05,760 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,760 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@696a04fe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:05,760 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,761 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31ed1238{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:05,768 INFO [Listener at localhost.localdomain/41959] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:05,769 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:05,769 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:05,769 INFO [Listener at localhost.localdomain/41959] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:05,770 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,771 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3a8b1fd4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:05,772 INFO [Listener at localhost.localdomain/41959] server.AbstractConnector(333): Started ServerConnector@39269edd{HTTP/1.1, (http/1.1)}{0.0.0.0:34805} 2023-07-14 17:14:05,772 INFO [Listener at localhost.localdomain/41959] server.Server(415): Started @44421ms 2023-07-14 17:14:05,783 INFO [Listener at localhost.localdomain/41959] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:05,783 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:05,784 INFO [Listener at localhost.localdomain/41959] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43799 2023-07-14 17:14:05,785 INFO [Listener at localhost.localdomain/41959] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:05,786 DEBUG [Listener at localhost.localdomain/41959] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:05,786 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,787 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,787 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43799 connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:05,790 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:437990x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:05,792 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43799-0x1008c79ba620002 connected 2023-07-14 17:14:05,792 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:05,793 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:05,793 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:05,793 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-14 17:14:05,794 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43799 2023-07-14 17:14:05,794 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43799 2023-07-14 17:14:05,794 DEBUG [Listener at localhost.localdomain/41959] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-14 17:14:05,794 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43799 2023-07-14 17:14:05,796 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:05,796 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:05,796 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:05,796 INFO [Listener at localhost.localdomain/41959] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:05,796 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:05,797 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:05,797 INFO [Listener at localhost.localdomain/41959] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 17:14:05,797 INFO [Listener at localhost.localdomain/41959] http.HttpServer(1146): Jetty bound to port 37211 2023-07-14 17:14:05,797 INFO [Listener at localhost.localdomain/41959] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:05,799 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,799 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46751bf1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:05,799 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,800 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5428a3f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:05,807 INFO [Listener at localhost.localdomain/41959] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:05,807 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:05,808 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:05,808 INFO [Listener 
at localhost.localdomain/41959] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 17:14:05,809 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,810 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2006aff9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:05,812 INFO [Listener at localhost.localdomain/41959] server.AbstractConnector(333): Started ServerConnector@4d9fdcd9{HTTP/1.1, (http/1.1)}{0.0.0.0:37211} 2023-07-14 17:14:05,812 INFO [Listener at localhost.localdomain/41959] server.Server(415): Started @44461ms 2023-07-14 17:14:05,821 INFO [Listener at localhost.localdomain/41959] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:05,822 INFO [Listener at localhost.localdomain/41959] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:05,823 INFO [Listener at localhost.localdomain/41959] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40919 2023-07-14 17:14:05,823 INFO [Listener at localhost.localdomain/41959] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:05,825 DEBUG [Listener at localhost.localdomain/41959] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:05,825 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,826 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,827 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process 
identifier=regionserver:40919 connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:05,829 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:409190x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:05,831 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:409190x0, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:05,831 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40919-0x1008c79ba620003 connected 2023-07-14 17:14:05,832 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:05,832 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:05,833 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40919 2023-07-14 17:14:05,833 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40919 2023-07-14 17:14:05,833 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40919 2023-07-14 17:14:05,834 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40919 2023-07-14 17:14:05,834 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40919 2023-07-14 17:14:05,835 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:05,835 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:05,836 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:05,836 INFO [Listener at localhost.localdomain/41959] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:05,836 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:05,836 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:05,836 INFO [Listener at localhost.localdomain/41959] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 17:14:05,837 INFO [Listener at localhost.localdomain/41959] http.HttpServer(1146): Jetty bound to port 42077 2023-07-14 17:14:05,837 INFO [Listener at localhost.localdomain/41959] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:05,838 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,838 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d67d082{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:05,838 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,838 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e257ad7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:05,844 INFO [Listener at localhost.localdomain/41959] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:05,844 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:05,845 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:05,845 INFO [Listener at localhost.localdomain/41959] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:05,846 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:05,847 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@679a327e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:05,848 INFO [Listener at localhost.localdomain/41959] server.AbstractConnector(333): Started ServerConnector@65cd1930{HTTP/1.1, (http/1.1)}{0.0.0.0:42077} 2023-07-14 17:14:05,849 INFO [Listener at localhost.localdomain/41959] server.Server(415): Started @44498ms 2023-07-14 17:14:05,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:05,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@48062a1f{HTTP/1.1, (http/1.1)}{0.0.0.0:36747} 2023-07-14 17:14:05,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @44503ms 2023-07-14 17:14:05,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,855 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): 
master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:14:05,856 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,856 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:05,856 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:05,856 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:05,856 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:05,857 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,859 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:14:05,860 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39335,1689354845688 from backup master directory 2023-07-14 17:14:05,860 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:14:05,861 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,861 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 17:14:05,861 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 17:14:05,861 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,878 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/hbase.id with ID: 723b4e3d-9707-48e9-89d8-6db837a1fe47 2023-07-14 17:14:05,892 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:05,894 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,913 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x39083b67 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:05,916 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a6fdfe4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:05,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:05,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 17:14:05,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:05,919 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store-tmp 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:14:05,929 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:05,929 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:05,929 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/WALs/jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,932 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39335%2C1689354845688, suffix=, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/WALs/jenkins-hbase20.apache.org,39335,1689354845688, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/oldWALs, maxLogs=10 2023-07-14 17:14:05,945 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:05,946 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:05,947 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:05,949 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/WALs/jenkins-hbase20.apache.org,39335,1689354845688/jenkins-hbase20.apache.org%2C39335%2C1689354845688.1689354845932 2023-07-14 17:14:05,949 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK], DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK]] 2023-07-14 17:14:05,949 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:05,950 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:05,950 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,950 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,952 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,954 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 17:14:05,954 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 17:14:05,955 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:05,955 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,956 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,958 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 17:14:05,960 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-14 17:14:05,960 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11452305600, jitterRate=0.06657907366752625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:05,960 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:05,961 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 17:14:05,962 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 17:14:05,962 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 17:14:05,962 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 17:14:05,969 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 7 msec 2023-07-14 17:14:05,970 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-14 17:14:05,970 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 17:14:05,970 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 17:14:05,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
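The preceding records create and open the master's local 'master:store' region, put an AsyncFSWAL under it, and start the region-backed procedure store. For orientation only, here is roughly how the descriptor printed above looks when expressed with the public builder API; the master builds this internally, so the snippet is purely illustrative:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
    public static void main(String[] args) {
        TableDescriptor store = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setMaxVersions(1)                 // VERSIONS => '1'
                .setInMemory(false)                // IN_MEMORY => 'false'
                .setBlocksize(65536)               // BLOCKSIZE => '65536'
                .build())
            .build();
        System.out.println(store); // prints a form similar to the descriptor in the log
    }
}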
2023-07-14 17:14:05,972 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 17:14:05,972 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 17:14:05,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 17:14:05,974 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,974 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 17:14:05,974 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 17:14:05,975 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 17:14:05,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:05,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:05,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:05,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:05,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39335,1689354845688, sessionid=0x1008c79ba620000, setting cluster-up flag (Was=false) 2023-07-14 17:14:05,980 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,982 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 17:14:05,982 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,984 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:05,986 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 17:14:05,987 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:05,987 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.hbase-snapshot/.tmp 2023-07-14 17:14:05,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 17:14:05,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 17:14:05,988 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 17:14:05,989 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:14:05,989 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-14 17:14:05,990 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:06,000 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:14:06,000 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
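At this point the RSGroupAdminEndpoint coprocessor and the group-aware load balancer have been loaded on the master. A sketch of the configuration that normally enables them on 2.x, shown only to indicate where these log lines come from; the test harness wires this up programmatically, so the snippet is an assumption-laden illustration, not the test's code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Master coprocessor exposing the RSGroupAdminService registered above.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Group-aware balancer; it delegates region placement to an internal
        // StochasticLoadBalancer, whose loaded config appears in the log.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.master.loadbalancer.class"));
    }
}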
2023-07-14 17:14:06,000 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 17:14:06,001 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:06,001 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689354876011 2023-07-14 17:14:06,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 17:14:06,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 17:14:06,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 17:14:06,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 17:14:06,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 17:14:06,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 17:14:06,015 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:06,015 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 17:14:06,016 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:06,018 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,024 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 17:14:06,024 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 17:14:06,024 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 17:14:06,030 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 17:14:06,030 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 17:14:06,030 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354846030,5,FailOnTimeoutGroup] 2023-07-14 17:14:06,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354846030,5,FailOnTimeoutGroup] 2023-07-14 17:14:06,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. 
Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 17:14:06,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,042 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:06,043 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:06,043 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 2023-07-14 17:14:06,055 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(951): ClusterId : 723b4e3d-9707-48e9-89d8-6db837a1fe47 2023-07-14 17:14:06,060 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:06,061 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(951): ClusterId : 723b4e3d-9707-48e9-89d8-6db837a1fe47 2023-07-14 17:14:06,061 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:06,062 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:06,065 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:06,065 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:06,067 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ReadOnlyZKClient(139): Connect 0x35862fbf to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:06,067 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(951): ClusterId : 
723b4e3d-9707-48e9-89d8-6db837a1fe47 2023-07-14 17:14:06,067 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:06,068 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:06,068 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:06,072 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:06,072 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:06,073 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:06,084 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:06,086 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ReadOnlyZKClient(139): Connect 0x7119fb69 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:06,086 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ReadOnlyZKClient(139): Connect 0x696f1d33 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:06,086 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,095 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:14:06,096 DEBUG [RS:0;jenkins-hbase20:45741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b41d77f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:06,096 DEBUG [RS:2;jenkins-hbase20:40919] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67a8ee6e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:06,096 DEBUG [RS:0;jenkins-hbase20:45741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f7e44cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:06,096 DEBUG [RS:2;jenkins-hbase20:40919] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63977643, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:06,096 DEBUG [RS:1;jenkins-hbase20:43799] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@579bb94c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:06,096 DEBUG [RS:1;jenkins-hbase20:43799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40a26c8c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:06,097 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/info 2023-07-14 17:14:06,098 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:14:06,098 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,098 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:14:06,100 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:06,100 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:14:06,101 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,101 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:14:06,102 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/table 2023-07-14 17:14:06,102 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:14:06,103 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,104 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740 2023-07-14 17:14:06,104 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740 2023-07-14 17:14:06,105 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:45741 2023-07-14 17:14:06,105 INFO [RS:0;jenkins-hbase20:45741] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:06,105 INFO [RS:0;jenkins-hbase20:45741] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:06,105 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:14:06,105 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39335,1689354845688 with isa=jenkins-hbase20.apache.org/148.251.75.209:45741, startcode=1689354845740 2023-07-14 17:14:06,105 DEBUG [RS:0;jenkins-hbase20:45741] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:06,107 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:43799 2023-07-14 17:14:06,107 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:40919 2023-07-14 17:14:06,107 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
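The FlushLargeStoresPolicy line just above derives the per-family flush lower bound for hbase:meta because no explicit hbase.hregion.percolumnfamilyflush.size.lower.bound is set: the region memstore flush size (134217728 bytes, the 128 MiB default) is divided by the region's 3 column families (info, rep_barrier, table), i.e. 134217728 / 3 = 44739242 bytes ≈ 42.7 MiB, which is exactly the flushSizeLowerBound=44739242 reported when the meta region is opened a few lines further down.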
2023-07-14 17:14:06,107 INFO [RS:2;jenkins-hbase20:40919] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:06,107 INFO [RS:2;jenkins-hbase20:40919] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:06,107 INFO [RS:1;jenkins-hbase20:43799] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:06,107 INFO [RS:1;jenkins-hbase20:43799] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:06,107 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:14:06,107 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:50147, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:06,107 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:14:06,109 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39335] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,109 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:14:06,110 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 17:14:06,110 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39335,1689354845688 with isa=jenkins-hbase20.apache.org/148.251.75.209:40919, startcode=1689354845821 2023-07-14 17:14:06,110 DEBUG [RS:2;jenkins-hbase20:40919] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:06,110 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39335,1689354845688 with isa=jenkins-hbase20.apache.org/148.251.75.209:43799, startcode=1689354845783 2023-07-14 17:14:06,110 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 2023-07-14 17:14:06,110 DEBUG [RS:1;jenkins-hbase20:43799] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:06,110 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:14:06,110 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43505 2023-07-14 17:14:06,111 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43807 2023-07-14 17:14:06,112 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:06,112 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 
148.251.75.209:51419, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:06,112 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39335] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 17:14:06,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-14 17:14:06,112 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,112 WARN [RS:0;jenkins-hbase20:45741] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:14:06,113 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 2023-07-14 17:14:06,113 INFO [RS:0;jenkins-hbase20:45741] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:06,113 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43505 2023-07-14 17:14:06,113 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43807 2023-07-14 17:14:06,113 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,115 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46905, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:06,115 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39335] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
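The reportForDuty / "Registering regionserver" exchanges above are what populate the rsgroup manager's 'default' group ("Updated with servers: 3" just below). A hedged sketch of reading that state back with the hbase-rsgroup client, assuming the RSGroupAdminClient class shipped on this branch; the quorum settings mirror the log, the rest is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "53758");
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
            RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
            // Expect the three host:port entries registered above.
            System.out.println("default group servers: " + defaultGroup.getServers());
        }
    }
}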
2023-07-14 17:14:06,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 17:14:06,115 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,115 WARN [RS:2;jenkins-hbase20:40919] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 17:14:06,116 INFO [RS:2;jenkins-hbase20:40919] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:06,117 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 2023-07-14 17:14:06,117 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43505 2023-07-14 17:14:06,117 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,117 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43807 2023-07-14 17:14:06,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,40919,1689354845821] 2023-07-14 17:14:06,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45741,1689354845740] 2023-07-14 17:14:06,118 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:06,118 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,118 WARN [RS:1;jenkins-hbase20:43799] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
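Each region server above instantiates AsyncFSWALProvider for its write-ahead logs. The provider is selected by hbase.wal.provider; a minimal configuration sketch (the 2.x default already resolves to asyncfs, so this only shows the knob being set explicitly):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" maps to AsyncFSWALProvider; "filesystem" would select the FSHLog-based provider.
        conf.set("hbase.wal.provider", "asyncfs");
        System.out.println(conf.get("hbase.wal.provider"));
    }
}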
2023-07-14 17:14:06,118 INFO [RS:1;jenkins-hbase20:43799] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:06,118 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,119 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43799,1689354845783] 2023-07-14 17:14:06,123 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:06,126 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11512772320, jitterRate=0.07221047580242157}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:14:06,126 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:14:06,126 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:14:06,126 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:14:06,126 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:14:06,126 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:14:06,126 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:14:06,127 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,127 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,127 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:06,127 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:14:06,127 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,127 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,127 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,127 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher 
on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,128 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,128 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,128 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 17:14:06,128 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 17:14:06,128 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,128 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 17:14:06,128 DEBUG [RS:2;jenkins-hbase20:40919] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:06,128 INFO [RS:2;jenkins-hbase20:40919] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:06,128 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:06,129 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 17:14:06,130 DEBUG [RS:1;jenkins-hbase20:43799] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:06,130 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 17:14:06,131 INFO [RS:1;jenkins-hbase20:43799] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:06,131 INFO [RS:0;jenkins-hbase20:45741] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:06,134 INFO [RS:2;jenkins-hbase20:40919] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:06,134 INFO [RS:0;jenkins-hbase20:45741] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:06,135 INFO [RS:1;jenkins-hbase20:43799] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:06,135 INFO [RS:2;jenkins-hbase20:40919] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: 
unlimited, tuning period: 60000 ms 2023-07-14 17:14:06,135 INFO [RS:0;jenkins-hbase20:45741] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:06,135 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,135 INFO [RS:1;jenkins-hbase20:43799] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:06,135 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,135 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,135 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:06,136 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:06,136 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:06,138 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,138 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,138 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,139 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:0;jenkins-hbase20:45741] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:06,140 DEBUG [RS:1;jenkins-hbase20:43799] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,140 DEBUG [RS:2;jenkins-hbase20:40919] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:06,144 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,144 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,144 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,147 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,147 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,147 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,147 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,147 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,150 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,159 INFO [RS:2;jenkins-hbase20:40919] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:06,159 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40919,1689354845821-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:06,164 INFO [RS:0;jenkins-hbase20:45741] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:06,164 INFO [RS:1;jenkins-hbase20:43799] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:06,164 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45741,1689354845740-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,164 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43799,1689354845783-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,169 INFO [RS:2;jenkins-hbase20:40919] regionserver.Replication(203): jenkins-hbase20.apache.org,40919,1689354845821 started 2023-07-14 17:14:06,169 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,40919,1689354845821, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:40919, sessionid=0x1008c79ba620003 2023-07-14 17:14:06,169 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:06,169 DEBUG [RS:2;jenkins-hbase20:40919] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,169 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40919,1689354845821' 2023-07-14 17:14:06,169 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:06,169 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40919,1689354845821' 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:06,170 DEBUG [RS:2;jenkins-hbase20:40919] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:06,171 INFO [RS:2;jenkins-hbase20:40919] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:14:06,171 INFO [RS:2;jenkins-hbase20:40919] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 17:14:06,173 INFO [RS:1;jenkins-hbase20:43799] regionserver.Replication(203): jenkins-hbase20.apache.org,43799,1689354845783 started 2023-07-14 17:14:06,174 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43799,1689354845783, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43799, sessionid=0x1008c79ba620002 2023-07-14 17:14:06,174 INFO [RS:0;jenkins-hbase20:45741] regionserver.Replication(203): jenkins-hbase20.apache.org,45741,1689354845740 started 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:06,174 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45741,1689354845740, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45741, sessionid=0x1008c79ba620001 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43799,1689354845783' 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45741,1689354845740' 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:06,174 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:06,174 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:06,175 DEBUG [RS:1;jenkins-hbase20:43799] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:06,175 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43799,1689354845783' 2023-07-14 17:14:06,175 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45741,1689354845740' 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:06,175 DEBUG [RS:1;jenkins-hbase20:43799] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:06,175 DEBUG [RS:1;jenkins-hbase20:43799] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:06,175 INFO [RS:1;jenkins-hbase20:43799] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:14:06,175 INFO [RS:1;jenkins-hbase20:43799] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 17:14:06,175 DEBUG [RS:0;jenkins-hbase20:45741] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:06,175 INFO [RS:0;jenkins-hbase20:45741] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:14:06,175 INFO [RS:0;jenkins-hbase20:45741] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 17:14:06,272 INFO [RS:2;jenkins-hbase20:40919] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40919%2C1689354845821, suffix=, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,40919,1689354845821, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs, maxLogs=32 2023-07-14 17:14:06,277 INFO [RS:1;jenkins-hbase20:43799] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43799%2C1689354845783, suffix=, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,43799,1689354845783, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs, maxLogs=32 2023-07-14 17:14:06,277 INFO [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45741%2C1689354845740, suffix=, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs, maxLogs=32 2023-07-14 17:14:06,280 DEBUG [jenkins-hbase20:39335] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 17:14:06,281 DEBUG [jenkins-hbase20:39335] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:06,281 DEBUG [jenkins-hbase20:39335] 
balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:06,281 DEBUG [jenkins-hbase20:39335] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:06,281 DEBUG [jenkins-hbase20:39335] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:06,281 DEBUG [jenkins-hbase20:39335] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:06,282 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45741,1689354845740, state=OPENING 2023-07-14 17:14:06,283 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 17:14:06,284 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:06,284 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:14:06,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45741,1689354845740}] 2023-07-14 17:14:06,299 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:06,299 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:06,299 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:06,301 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:06,304 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:06,305 WARN [ReadOnlyZKClient-127.0.0.1:53758@0x39083b67] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-14 17:14:06,306 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:06,306 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:06,307 INFO [RS:2;jenkins-hbase20:40919] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,40919,1689354845821/jenkins-hbase20.apache.org%2C40919%2C1689354845821.1689354846273 2023-07-14 17:14:06,313 DEBUG [RS:2;jenkins-hbase20:40919] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK], DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK]] 2023-07-14 17:14:06,313 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:06,313 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:06,313 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:06,313 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:06,314 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45741] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:33378 deadline: 1689354906313, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,314 INFO [RS:1;jenkins-hbase20:43799] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,43799,1689354845783/jenkins-hbase20.apache.org%2C43799%2C1689354845783.1689354846277 2023-07-14 17:14:06,314 DEBUG [RS:1;jenkins-hbase20:43799] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK], DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK]] 2023-07-14 17:14:06,319 INFO [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740/jenkins-hbase20.apache.org%2C45741%2C1689354845740.1689354846278 2023-07-14 17:14:06,322 DEBUG [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK], DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK]] 2023-07-14 17:14:06,409 WARN 
[HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-14 17:14:06,440 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,442 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:14:06,444 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:14:06,449 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 17:14:06,449 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:06,451 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45741%2C1689354845740.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs, maxLogs=32 2023-07-14 17:14:06,468 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:06,468 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:06,468 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:06,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740/jenkins-hbase20.apache.org%2C45741%2C1689354845740.meta.1689354846451.meta 2023-07-14 17:14:06,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK], DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK]] 2023-07-14 17:14:06,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:06,474 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 
17:14:06,474 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 17:14:06,475 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 17:14:06,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 17:14:06,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 17:14:06,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 17:14:06,479 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 17:14:06,482 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/info 2023-07-14 17:14:06,482 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/info 2023-07-14 17:14:06,483 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 17:14:06,484 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,484 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 17:14:06,485 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:06,485 DEBUG 
[StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/rep_barrier 2023-07-14 17:14:06,486 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 17:14:06,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 17:14:06,488 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/table 2023-07-14 17:14:06,489 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/table 2023-07-14 17:14:06,489 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 17:14:06,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,493 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740 2023-07-14 17:14:06,494 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740 
2023-07-14 17:14:06,497 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 17:14:06,499 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 17:14:06,500 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10989385120, jitterRate=0.023466244339942932}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 17:14:06,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 17:14:06,502 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689354846440 2023-07-14 17:14:06,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 17:14:06,511 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45741,1689354845740, state=OPEN 2023-07-14 17:14:06,513 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 17:14:06,515 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 17:14:06,515 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 17:14:06,518 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 17:14:06,518 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45741,1689354845740 in 231 msec 2023-07-14 17:14:06,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 17:14:06,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 390 msec 2023-07-14 17:14:06,525 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 533 msec 2023-07-14 17:14:06,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689354846526, completionTime=-1 2023-07-14 17:14:06,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 17:14:06,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-14 17:14:06,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 17:14:06,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689354906530 2023-07-14 17:14:06,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689354966530 2023-07-14 17:14:06,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39335,1689354845688-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39335,1689354845688-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39335,1689354845688-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39335, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-14 17:14:06,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:06,536 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 17:14:06,537 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 17:14:06,538 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:06,538 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:06,539 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/namespace/c02f06fa368297001184995300a52985 2023-07-14 17:14:06,540 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/namespace/c02f06fa368297001184995300a52985 empty. 2023-07-14 17:14:06,540 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/namespace/c02f06fa368297001184995300a52985 2023-07-14 17:14:06,540 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 17:14:06,551 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:06,552 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c02f06fa368297001184995300a52985, NAME => 'hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp 2023-07-14 17:14:06,561 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,561 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c02f06fa368297001184995300a52985, disabling compactions & flushes 2023-07-14 17:14:06,561 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,561 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,561 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. after waiting 0 ms 2023-07-14 17:14:06,561 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,562 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,562 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c02f06fa368297001184995300a52985: 2023-07-14 17:14:06,564 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:06,565 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354846565"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354846565"}]},"ts":"1689354846565"} 2023-07-14 17:14:06,567 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:14:06,568 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:06,568 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354846568"}]},"ts":"1689354846568"} 2023-07-14 17:14:06,569 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 17:14:06,571 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:06,571 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:06,571 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:06,571 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:06,571 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:06,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c02f06fa368297001184995300a52985, ASSIGN}] 2023-07-14 17:14:06,573 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c02f06fa368297001184995300a52985, ASSIGN 2023-07-14 17:14:06,574 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c02f06fa368297001184995300a52985, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45741,1689354845740; forceNewPlan=false, retain=false 2023-07-14 17:14:06,618 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:06,621 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 17:14:06,624 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:06,625 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:06,628 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,629 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd empty. 
2023-07-14 17:14:06,630 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,630 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 17:14:06,643 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:06,644 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => fd011a1ad1184dc66ae1fa5bcb4bbecd, NAME => 'hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp 2023-07-14 17:14:06,651 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,652 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing fd011a1ad1184dc66ae1fa5bcb4bbecd, disabling compactions & flushes 2023-07-14 17:14:06,652 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,652 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,652 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. after waiting 0 ms 2023-07-14 17:14:06,652 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,652 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 
2023-07-14 17:14:06,652 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for fd011a1ad1184dc66ae1fa5bcb4bbecd: 2023-07-14 17:14:06,654 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:06,654 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354846654"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354846654"}]},"ts":"1689354846654"} 2023-07-14 17:14:06,655 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:14:06,656 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:06,656 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354846656"}]},"ts":"1689354846656"} 2023-07-14 17:14:06,657 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 17:14:06,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:06,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:06,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:06,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:06,659 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:06,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fd011a1ad1184dc66ae1fa5bcb4bbecd, ASSIGN}] 2023-07-14 17:14:06,660 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=fd011a1ad1184dc66ae1fa5bcb4bbecd, ASSIGN 2023-07-14 17:14:06,661 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=fd011a1ad1184dc66ae1fa5bcb4bbecd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45741,1689354845740; forceNewPlan=false, retain=false 2023-07-14 17:14:06,661 INFO [jenkins-hbase20:39335] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-14 17:14:06,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c02f06fa368297001184995300a52985, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,663 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fd011a1ad1184dc66ae1fa5bcb4bbecd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,663 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354846663"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354846663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354846663"}]},"ts":"1689354846663"} 2023-07-14 17:14:06,663 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354846663"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354846663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354846663"}]},"ts":"1689354846663"} 2023-07-14 17:14:06,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure fd011a1ad1184dc66ae1fa5bcb4bbecd, server=jenkins-hbase20.apache.org,45741,1689354845740}] 2023-07-14 17:14:06,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=5, state=RUNNABLE; OpenRegionProcedure c02f06fa368297001184995300a52985, server=jenkins-hbase20.apache.org,45741,1689354845740}] 2023-07-14 17:14:06,821 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 
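[editor's note] The info:regioninfo / info:sn / info:state columns that RegionStateStore writes above can be read back from hbase:meta with a plain client Get. A minimal sketch, assuming the cluster is reachable and using the region row key printed in the log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaRowSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Row key = the region name printed in the log lines above.
    byte[] row = Bytes.toBytes(
        "hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(row).addFamily(Bytes.toBytes("info")));
      // info:state holds the region state name, info:sn the "host,port,startcode" string.
      System.out.println("state = "
          + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
      System.out.println("sn    = "
          + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"))));
    }
  }
}
```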
2023-07-14 17:14:06,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c02f06fa368297001184995300a52985, NAME => 'hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:06,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c02f06fa368297001184995300a52985 2023-07-14 17:14:06,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c02f06fa368297001184995300a52985 2023-07-14 17:14:06,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c02f06fa368297001184995300a52985 2023-07-14 17:14:06,823 INFO [StoreOpener-c02f06fa368297001184995300a52985-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c02f06fa368297001184995300a52985 2023-07-14 17:14:06,824 DEBUG [StoreOpener-c02f06fa368297001184995300a52985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/info 2023-07-14 17:14:06,824 DEBUG [StoreOpener-c02f06fa368297001184995300a52985-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/info 2023-07-14 17:14:06,824 INFO [StoreOpener-c02f06fa368297001184995300a52985-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c02f06fa368297001184995300a52985 columnFamilyName info 2023-07-14 17:14:06,825 INFO [StoreOpener-c02f06fa368297001184995300a52985-1] regionserver.HStore(310): Store=c02f06fa368297001184995300a52985/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985 2023-07-14 17:14:06,826 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985 2023-07-14 17:14:06,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c02f06fa368297001184995300a52985 2023-07-14 17:14:06,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:06,832 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c02f06fa368297001184995300a52985; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10489294720, jitterRate=-0.023108303546905518}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:06,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c02f06fa368297001184995300a52985: 2023-07-14 17:14:06,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985., pid=9, masterSystemTime=1689354846817 2023-07-14 17:14:06,836 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:06,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd011a1ad1184dc66ae1fa5bcb4bbecd, NAME => 'hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:06,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 17:14:06,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. service=MultiRowMutationService 2023-07-14 17:14:06,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
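[editor's note] The CompactionConfiguration values logged when each store opens (minCompactSize 128 MB, min/max files 3/10, ratio 1.2, off-peak ratio 5.0) come from stock hbase-site.xml keys. The mapping below is a sketch for orientation; these are the default HBase key names, shown here as an assumption about where the logged values originate, not as this test's explicit configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfSketch {
  public static void main(String[] args) {
    // The knobs behind the CompactionConfiguration line logged above.
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // compaction ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    System.out.println("min files to compact = " + conf.get("hbase.hstore.compaction.min"));
  }
}
```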
2023-07-14 17:14:06,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:06,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,838 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c02f06fa368297001184995300a52985, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,838 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689354846838"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354846838"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354846838"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354846838"}]},"ts":"1689354846838"} 2023-07-14 17:14:06,839 INFO [StoreOpener-fd011a1ad1184dc66ae1fa5bcb4bbecd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,841 DEBUG [StoreOpener-fd011a1ad1184dc66ae1fa5bcb4bbecd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/m 2023-07-14 17:14:06,841 DEBUG [StoreOpener-fd011a1ad1184dc66ae1fa5bcb4bbecd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/m 2023-07-14 17:14:06,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=5 2023-07-14 17:14:06,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=5, state=SUCCESS; OpenRegionProcedure c02f06fa368297001184995300a52985, server=jenkins-hbase20.apache.org,45741,1689354845740 in 173 msec 2023-07-14 17:14:06,842 INFO [StoreOpener-fd011a1ad1184dc66ae1fa5bcb4bbecd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd011a1ad1184dc66ae1fa5bcb4bbecd columnFamilyName m 2023-07-14 17:14:06,843 INFO [StoreOpener-fd011a1ad1184dc66ae1fa5bcb4bbecd-1] regionserver.HStore(310): Store=fd011a1ad1184dc66ae1fa5bcb4bbecd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:06,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-14 17:14:06,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c02f06fa368297001184995300a52985, ASSIGN in 270 msec 2023-07-14 17:14:06,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,844 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:06,844 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354846844"}]},"ts":"1689354846844"} 2023-07-14 17:14:06,845 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 17:14:06,848 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:06,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:06,850 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 313 msec 2023-07-14 17:14:06,851 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:06,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened fd011a1ad1184dc66ae1fa5bcb4bbecd; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@50df6019, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:06,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for fd011a1ad1184dc66ae1fa5bcb4bbecd: 2023-07-14 17:14:06,853 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd., pid=8, masterSystemTime=1689354846817 2023-07-14 17:14:06,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,855 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:06,855 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=fd011a1ad1184dc66ae1fa5bcb4bbecd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:06,855 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689354846855"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354846855"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354846855"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354846855"}]},"ts":"1689354846855"} 2023-07-14 17:14:06,858 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-14 17:14:06,858 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure fd011a1ad1184dc66ae1fa5bcb4bbecd, server=jenkins-hbase20.apache.org,45741,1689354845740 in 192 msec 2023-07-14 17:14:06,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-14 17:14:06,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=fd011a1ad1184dc66ae1fa5bcb4bbecd, ASSIGN in 199 msec 2023-07-14 17:14:06,860 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:06,860 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354846860"}]},"ts":"1689354846860"} 2023-07-14 17:14:06,862 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 17:14:06,865 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:06,867 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 248 msec 2023-07-14 17:14:06,925 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 17:14:06,925 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
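[editor's note] The znodes the RSGroupInfoManager updates above live under /hbase/rsgroup and can be listed directly with the plain ZooKeeper client. A minimal sketch against the quorum address printed in the log; the znode data itself is a serialized protobuf, so only the child names are shown:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsGroupZnodeSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:53758", 90_000, (WatchedEvent e) -> {
      if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();   // session established
      }
    });
    connected.await();
    // The group manager keeps one child per rsgroup, e.g. "default" (and later "master").
    List<String> groups = zk.getChildren("/hbase/rsgroup", false);
    System.out.println("rsgroup znodes: " + groups);
    zk.close();
  }
}
```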
2023-07-14 17:14:06,930 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:06,930 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:06,932 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:14:06,933 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 17:14:06,937 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 17:14:06,944 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:06,945 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:06,948 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 17:14:06,957 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:06,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-07-14 17:14:06,970 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 17:14:06,979 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:06,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-14 17:14:06,994 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 17:14:06,995 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 17:14:06,996 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.134sec 2023-07-14 17:14:06,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-14 17:14:06,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 17:14:06,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 17:14:06,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39335,1689354845688-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-14 17:14:06,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39335,1689354845688-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-14 17:14:07,005 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 17:14:07,066 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ReadOnlyZKClient(139): Connect 0x04a53f68 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:07,087 DEBUG [Listener at localhost.localdomain/41959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8db9719, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:07,092 DEBUG [hconnection-0x3ee45dbe-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:07,094 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:07,095 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:07,096 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:07,099 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 17:14:07,101 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38230, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 17:14:07,115 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 17:14:07,116 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:07,116 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-14 17:14:07,117 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ReadOnlyZKClient(139): Connect 0x59afc043 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:07,125 DEBUG [Listener at localhost.localdomain/41959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45ba6f65, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:07,125 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:07,128 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:07,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008c79ba62000a connected 2023-07-14 17:14:07,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,138 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-14 17:14:07,148 INFO [Listener at localhost.localdomain/41959] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 17:14:07,149 INFO [Listener at localhost.localdomain/41959] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 17:14:07,150 INFO [Listener at localhost.localdomain/41959] ipc.NettyRpcServer(120): Bind to /148.251.75.209:35249 2023-07-14 17:14:07,150 
INFO [Listener at localhost.localdomain/41959] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 17:14:07,151 DEBUG [Listener at localhost.localdomain/41959] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 17:14:07,152 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:07,153 INFO [Listener at localhost.localdomain/41959] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 17:14:07,154 INFO [Listener at localhost.localdomain/41959] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35249 connecting to ZooKeeper ensemble=127.0.0.1:53758 2023-07-14 17:14:07,157 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:352490x0, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 17:14:07,161 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(162): regionserver:352490x0, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 17:14:07,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35249-0x1008c79ba62000b connected 2023-07-14 17:14:07,163 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-14 17:14:07,164 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ZKUtil(164): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 17:14:07,164 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35249 2023-07-14 17:14:07,164 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35249 2023-07-14 17:14:07,165 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35249 2023-07-14 17:14:07,165 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35249 2023-07-14 17:14:07,165 DEBUG [Listener at localhost.localdomain/41959] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35249 2023-07-14 17:14:07,167 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 17:14:07,167 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 17:14:07,167 INFO [Listener at localhost.localdomain/41959] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 17:14:07,167 INFO 
[Listener at localhost.localdomain/41959] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 17:14:07,167 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 17:14:07,167 INFO [Listener at localhost.localdomain/41959] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 17:14:07,168 INFO [Listener at localhost.localdomain/41959] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 17:14:07,168 INFO [Listener at localhost.localdomain/41959] http.HttpServer(1146): Jetty bound to port 32931 2023-07-14 17:14:07,168 INFO [Listener at localhost.localdomain/41959] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 17:14:07,169 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:07,169 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ee67f7f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,AVAILABLE} 2023-07-14 17:14:07,170 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:07,170 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b8742d0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-14 17:14:07,175 INFO [Listener at localhost.localdomain/41959] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 17:14:07,176 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 17:14:07,176 INFO [Listener at localhost.localdomain/41959] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 17:14:07,177 INFO [Listener at localhost.localdomain/41959] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 17:14:07,178 INFO [Listener at localhost.localdomain/41959] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 17:14:07,179 INFO [Listener at localhost.localdomain/41959] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@396b9af3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:07,181 INFO [Listener at localhost.localdomain/41959] server.AbstractConnector(333): Started ServerConnector@37afaada{HTTP/1.1, (http/1.1)}{0.0.0.0:32931} 2023-07-14 17:14:07,181 INFO [Listener at localhost.localdomain/41959] 
server.Server(415): Started @45830ms 2023-07-14 17:14:07,184 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(951): ClusterId : 723b4e3d-9707-48e9-89d8-6db837a1fe47 2023-07-14 17:14:07,184 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 17:14:07,185 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 17:14:07,185 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 17:14:07,186 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 17:14:07,188 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ReadOnlyZKClient(139): Connect 0x51111ef7 to 127.0.0.1:53758 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 17:14:07,196 DEBUG [RS:3;jenkins-hbase20:35249] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fd6eaa5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 17:14:07,196 DEBUG [RS:3;jenkins-hbase20:35249] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5be97587, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:07,204 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:35249 2023-07-14 17:14:07,204 INFO [RS:3;jenkins-hbase20:35249] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 17:14:07,205 INFO [RS:3;jenkins-hbase20:35249] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 17:14:07,205 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 17:14:07,205 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39335,1689354845688 with isa=jenkins-hbase20.apache.org/148.251.75.209:35249, startcode=1689354847148 2023-07-14 17:14:07,205 DEBUG [RS:3;jenkins-hbase20:35249] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 17:14:07,208 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45239, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 17:14:07,209 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39335] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,209 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
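[editor's note] Once the fourth region server registers (ServerManager / RegionServerTracker above), the live-server list is visible to any client through ClusterMetrics; the same Admin handle also exposes the balancer toggle corresponding to the earlier "set balanceSwitch=false" RPC. A sketch, assuming a reachable cluster:

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Equivalent of the "set balanceSwitch=false" request seen earlier in the log.
      boolean previous = admin.balancerSwitch(false, true);
      System.out.println("balancer was previously " + (previous ? "on" : "off"));

      // Region servers currently registered with the master.
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println("live server: " + sn);
      }
    }
  }
}
```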
2023-07-14 17:14:07,209 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644 2023-07-14 17:14:07,209 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43505 2023-07-14 17:14:07,209 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43807 2023-07-14 17:14:07,212 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:07,212 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:07,212 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:07,212 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:07,212 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:07,212 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 17:14:07,212 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,35249,1689354847148] 2023-07-14 17:14:07,212 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,212 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:07,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:07,213 WARN [RS:3;jenkins-hbase20:35249] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 17:14:07,213 INFO [RS:3;jenkins-hbase20:35249] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 17:14:07,213 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:07,214 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:07,216 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:07,216 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:07,216 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,230 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:07,231 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:07,231 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:07,232 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ZKUtil(162): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,233 DEBUG [RS:3;jenkins-hbase20:35249] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 17:14:07,233 INFO [RS:3;jenkins-hbase20:35249] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 17:14:07,234 INFO [RS:3;jenkins-hbase20:35249] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 17:14:07,234 INFO [RS:3;jenkins-hbase20:35249] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 17:14:07,235 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:07,235 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 17:14:07,237 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,238 DEBUG [RS:3;jenkins-hbase20:35249] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-14 17:14:07,245 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:07,245 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:07,245 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 17:14:07,257 INFO [RS:3;jenkins-hbase20:35249] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 17:14:07,257 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35249,1689354847148-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 17:14:07,268 INFO [RS:3;jenkins-hbase20:35249] regionserver.Replication(203): jenkins-hbase20.apache.org,35249,1689354847148 started 2023-07-14 17:14:07,268 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,35249,1689354847148, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:35249, sessionid=0x1008c79ba62000b 2023-07-14 17:14:07,268 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 17:14:07,268 DEBUG [RS:3;jenkins-hbase20:35249] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,268 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,35249,1689354847148' 2023-07-14 17:14:07,268 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 17:14:07,269 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 17:14:07,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:07,269 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 17:14:07,269 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 17:14:07,269 DEBUG [RS:3;jenkins-hbase20:35249] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,269 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,35249,1689354847148' 2023-07-14 17:14:07,270 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 17:14:07,270 DEBUG [RS:3;jenkins-hbase20:35249] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 17:14:07,271 DEBUG [RS:3;jenkins-hbase20:35249] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 17:14:07,271 INFO [RS:3;jenkins-hbase20:35249] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 17:14:07,271 INFO [RS:3;jenkins-hbase20:35249] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 17:14:07,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:07,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:07,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:07,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:07,277 DEBUG [hconnection-0x3b6a61d6-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 17:14:07,282 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33410, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 17:14:07,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:07,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:07,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:38230 deadline: 1689356047292, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
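[editor's note] The three RSGroupAdminService calls above (AddRSGroup, ListRSGroupInfos, and the rejected moveServers) map onto the hbase-rsgroup client used by this test. The sketch below assumes an open Connection and that RSGroupAdminClient is on the classpath (it is an internal, test-facing client); the ConstraintException is expected because jenkins-hbase20.apache.org:39335 is the master's RPC address, not a registered region server.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      rsGroupAdmin.addRSGroup("master");                      // AddRSGroup
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {  // ListRSGroupInfos
        System.out.println(info.getName() + " -> " + info.getServers());
      }
      try {
        // Rejected by the master: its own address is not a region server.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 39335)),
            "master");
      } catch (ConstraintException expected) {
        System.out.println("move rejected: " + expected.getMessage());
      }
    }
  }
}
```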
2023-07-14 17:14:07,293 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:14:07,297 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:07,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,299 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:07,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:07,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:07,374 INFO [RS:3;jenkins-hbase20:35249] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C35249%2C1689354847148, suffix=, logDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,35249,1689354847148, archiveDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs, maxLogs=32 2023-07-14 17:14:07,385 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=559 (was 521) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase20:35249 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x04a53f68-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1180047441-2303-acceptor-0@6325de8d-ServerConnector@4d9fdcd9{HTTP/1.1, (http/1.1)}{0.0.0.0:37211} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7f3445f9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2613 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) 
Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: CacheReplicationMonitor(479184026) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp173761295-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354846030 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp1187110071-2348 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp370907958-2333-acceptor-0@6d5682b0-ServerConnector@65cd1930{HTTP/1.1, (http/1.1)}{0.0.0.0:42077} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/39045-SendThread(127.0.0.1:56537) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: qtp1187110071-2350 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x7119fb69-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@53ed0b1e sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43505 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:48274 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x35862fbf-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-824354788_17 at /127.0.0.1:60270 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x39083b67-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x7119fb69 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x3b6a61d6-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3b6a61d6-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:35249-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: IPC Server handler 1 on default port 40387 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp370907958-2339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data2/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x696f1d33 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:60298 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase20:39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39335,1689354845688 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Server idle connection scanner for port 40387 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_187058629_17 at /127.0.0.1:60240 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp370907958-2336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2610 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x59afc043-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,44713,1689354839774 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1187110071-2347-acceptor-0@5bf4acb2-ServerConnector@48062a1f{HTTP/1.1, (http/1.1)}{0.0.0.0:36747} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:48262 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:38043 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644-prefix:jenkins-hbase20.apache.org,45741,1689354845740 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data6/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData-prefix:jenkins-hbase20.apache.org,39335,1689354845688 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1759521972-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 87248363@qtp-405293291-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33755 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0xd620467-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-824354788_17 at /127.0.0.1:52662 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x7119fb69-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1180047441-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase20:40919-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1180047441-2302 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase20:45741-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:43505 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41959 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 41959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 43505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:38043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644-prefix:jenkins-hbase20.apache.org,43799,1689354845783 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:52686 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354846030 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Server handler 4 on default port 40387 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-24b41a24-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:0;jenkins-hbase20:45741 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data1/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@18eeec0f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:43505 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp173761295-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd620467-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2614 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 299797634@qtp-757478613-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34929 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56537@0x1c8ca688-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0xd620467-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_187058629_17 at /127.0.0.1:52638 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981818142_17 at /127.0.0.1:48258 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1180047441-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@6873168d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-824354788_17 at /127.0.0.1:48244 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56537@0x1c8ca688-SendThread(127.0.0.1:56537) java.lang.Thread.sleep(Native Method) 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:43505 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/41959.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56537@0x1c8ca688 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3099d3ab java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@134e58a1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:45741Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp370907958-2338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp173761295-2272 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@3ef71437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41897 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1759521972-2245 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3cd65c67 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1759521972-2242-acceptor-0@2b222c01-ServerConnector@392abe7f{HTTP/1.1, (http/1.1)}{0.0.0.0:43807} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1187110071-2343 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40387 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1180047441-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1049063973-2612 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:53758@0x35862fbf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:43505 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 1745047005@qtp-405293291-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1187110071-2346 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:52702 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at 
/127.0.0.1:48178 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2608-acceptor-0@4a216ba4-ServerConnector@37afaada{HTTP/1.1, (http/1.1)}{0.0.0.0:32931} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd620467-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@62f99981 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@797cbb6b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp173761295-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:38043 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x04a53f68-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1049063973-2607 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7642272-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1759521972-2246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1716361342@qtp-1911464360-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:33073 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1759521972-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@ee9a820[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:43505 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1980485814_17 at /127.0.0.1:60294 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp173761295-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:43799 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_187058629_17 at /127.0.0.1:60214 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp370907958-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@1e314466 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1180047441-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:2;jenkins-hbase20:40919 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@73940b73 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data3/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:38043 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-4ac65f6d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1187110071-2345 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: Listener at localhost.localdomain/41959 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:38043 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:43799Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1759521972-2241 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 585608315@qtp-1512793685-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:40919Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:38043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp173761295-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-22c8d31b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-565-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase20:39335 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-65a619a8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp370907958-2335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp173761295-2273-acceptor-0@2a4ddf20-ServerConnector@39269edd{HTTP/1.1, (http/1.1)}{0.0.0.0:34805} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x51111ef7-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644-prefix:jenkins-hbase20.apache.org,40919,1689354845821 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x35862fbf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x51111ef7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 3 on default port 41897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 40387 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/41959.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 1 on default port 41959 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0xd620467-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4b81d267 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1837202295@qtp-1512793685-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39817 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp370907958-2334 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3ee45dbe-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x696f1d33-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_187058629_17 at /127.0.0.1:48212 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1187110071-2344 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x696f1d33-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (943951222) connection to 
localhost.localdomain/127.0.0.1:38043 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x39083b67-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x04a53f68 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp370907958-2332 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1647920887.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35249 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1759521972-2247 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1049063973-2611 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd620467-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1664951409@qtp-757478613-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981818142_17 at /127.0.0.1:52672 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp173761295-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:43505 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x59afc043 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase20:35249Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1966826065@qtp-1911464360-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ProcessThread(sid:0 cport:53758): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 2 on default port 40387 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:43505 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data4/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x59afc043-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35249 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (943951222) connection to localhost.localdomain/127.0.0.1:43505 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/41959-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1759521972-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1187110071-2349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 41897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-570-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@46b34492[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1180047441-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data5/current/BP-147832095-148.251.75.209-1689354845074 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (943951222) connection to 
localhost.localdomain/127.0.0.1:43505 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x39083b67 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:43505 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 43505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1180047441-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41959-SendThread(127.0.0.1:53758) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 43505 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981818142_17 at /127.0.0.1:60282 [Receiving block BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-147832095-148.251.75.209-1689354845074:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53758@0x51111ef7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/849091510.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:43799-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd620467-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644-prefix:jenkins-hbase20.apache.org,45741,1689354845740.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:53758 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4c284ea7 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd620467-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:38043 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41897 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) - Thread LEAK? -, OpenFileDescriptor=830 (was 825) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=521 (was 543), ProcessCount=172 (was 172), AvailableMemoryMB=3144 (was 3624) 2023-07-14 17:14:07,388 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-14 17:14:07,410 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK] 2023-07-14 17:14:07,415 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK] 2023-07-14 17:14:07,415 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=559, OpenFileDescriptor=832, MaxFileDescriptor=60000, SystemLoadAverage=521, ProcessCount=173, AvailableMemoryMB=3139 2023-07-14 17:14:07,415 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-14 17:14:07,416 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-14 17:14:07,416 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK] 2023-07-14 17:14:07,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:07,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:14:07,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:07,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:07,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:07,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:07,424 INFO [RS:3;jenkins-hbase20:35249] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,35249,1689354847148/jenkins-hbase20.apache.org%2C35249%2C1689354847148.1689354847374 2023-07-14 17:14:07,425 DEBUG [RS:3;jenkins-hbase20:35249] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45065,DS-4215712d-f263-40b9-9298-85201cfd7727,DISK], DatanodeInfoWithStorage[127.0.0.1:43591,DS-36b58b11-1385-4fc8-b76b-4adb69764848,DISK], DatanodeInfoWithStorage[127.0.0.1:45875,DS-f8e1d24f-ac3d-4f8c-ad58-69c0fe03bd9d,DISK]] 2023-07-14 17:14:07,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:07,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:07,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:07,431 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:07,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:07,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:07,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:07,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:07,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:07,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:07,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:07,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:38230 deadline: 1689356047449, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:07,450 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:14:07,452 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:07,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:07,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:07,454 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:07,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:07,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:07,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:07,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-14 17:14:07,460 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:07,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-14 17:14:07,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 17:14:07,462 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:07,462 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:07,463 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:07,465 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 17:14:07,466 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,467 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 empty. 2023-07-14 17:14:07,467 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,468 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-14 17:14:07,482 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-14 17:14:07,483 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => d4406ab2874a10b7c3bd9d99b5e15033, NAME => 't1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp 2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing d4406ab2874a10b7c3bd9d99b5e15033, disabling compactions & flushes 2023-07-14 17:14:07,492 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. after waiting 0 ms 2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:07,492 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 
2023-07-14 17:14:07,492 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for d4406ab2874a10b7c3bd9d99b5e15033: 2023-07-14 17:14:07,494 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 17:14:07,495 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354847495"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354847495"}]},"ts":"1689354847495"} 2023-07-14 17:14:07,496 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 17:14:07,497 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 17:14:07,497 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354847497"}]},"ts":"1689354847497"} 2023-07-14 17:14:07,498 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-14 17:14:07,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 17:14:07,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, ASSIGN}] 2023-07-14 17:14:07,501 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, ASSIGN 2023-07-14 17:14:07,502 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,35249,1689354847148; forceNewPlan=false, retain=false 2023-07-14 17:14:07,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 17:14:07,652 INFO [jenkins-hbase20:39335] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 17:14:07,654 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d4406ab2874a10b7c3bd9d99b5e15033, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,654 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354847654"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354847654"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354847654"}]},"ts":"1689354847654"} 2023-07-14 17:14:07,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure d4406ab2874a10b7c3bd9d99b5e15033, server=jenkins-hbase20.apache.org,35249,1689354847148}] 2023-07-14 17:14:07,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 17:14:07,810 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,811 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 17:14:07,812 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 17:14:07,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:07,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d4406ab2874a10b7c3bd9d99b5e15033, NAME => 't1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.', STARTKEY => '', ENDKEY => ''} 2023-07-14 17:14:07,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 17:14:07,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,816 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,817 INFO [StoreOpener-d4406ab2874a10b7c3bd9d99b5e15033-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,819 DEBUG [StoreOpener-d4406ab2874a10b7c3bd9d99b5e15033-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/cf1 2023-07-14 17:14:07,819 DEBUG 
[StoreOpener-d4406ab2874a10b7c3bd9d99b5e15033-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/cf1 2023-07-14 17:14:07,820 INFO [StoreOpener-d4406ab2874a10b7c3bd9d99b5e15033-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d4406ab2874a10b7c3bd9d99b5e15033 columnFamilyName cf1 2023-07-14 17:14:07,820 INFO [StoreOpener-d4406ab2874a10b7c3bd9d99b5e15033-1] regionserver.HStore(310): Store=d4406ab2874a10b7c3bd9d99b5e15033/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 17:14:07,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:07,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 17:14:07,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d4406ab2874a10b7c3bd9d99b5e15033; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9970933920, jitterRate=-0.07138441503047943}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 17:14:07,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d4406ab2874a10b7c3bd9d99b5e15033: 2023-07-14 17:14:07,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033., pid=14, masterSystemTime=1689354847810 2023-07-14 17:14:07,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:07,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 
2023-07-14 17:14:07,832 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d4406ab2874a10b7c3bd9d99b5e15033, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:07,833 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354847832"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689354847832"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689354847832"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689354847832"}]},"ts":"1689354847832"} 2023-07-14 17:14:07,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-14 17:14:07,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure d4406ab2874a10b7c3bd9d99b5e15033, server=jenkins-hbase20.apache.org,35249,1689354847148 in 178 msec 2023-07-14 17:14:07,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-14 17:14:07,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, ASSIGN in 335 msec 2023-07-14 17:14:07,837 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 17:14:07,838 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354847838"}]},"ts":"1689354847838"} 2023-07-14 17:14:07,839 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-14 17:14:07,841 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 17:14:07,842 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 385 msec 2023-07-14 17:14:08,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 17:14:08,064 INFO [Listener at localhost.localdomain/41959] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-14 17:14:08,064 DEBUG [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-14 17:14:08,065 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,067 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-14 17:14:08,067 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,067 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-14 17:14:08,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 17:14:08,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-14 17:14:08,071 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 17:14:08,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-14 17:14:08,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 148.251.75.209:38230 deadline: 1689354908069, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-14 17:14:08,074 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-14 17:14:08,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,176 INFO [Listener at localhost.localdomain/41959] client.HBaseAdmin$15(890): Started disable of t1 2023-07-14 17:14:08,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable t1 2023-07-14 17:14:08,177 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-14 17:14:08,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:08,181 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354848180"}]},"ts":"1689354848180"} 2023-07-14 17:14:08,182 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-14 17:14:08,184 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-14 17:14:08,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, UNASSIGN}] 2023-07-14 17:14:08,187 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, UNASSIGN 2023-07-14 17:14:08,197 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d4406ab2874a10b7c3bd9d99b5e15033, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:08,197 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354848197"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689354848197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689354848197"}]},"ts":"1689354848197"} 2023-07-14 17:14:08,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure d4406ab2874a10b7c3bd9d99b5e15033, server=jenkins-hbase20.apache.org,35249,1689354847148}] 2023-07-14 17:14:08,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:08,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:08,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d4406ab2874a10b7c3bd9d99b5e15033, disabling compactions & flushes 2023-07-14 17:14:08,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:08,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:08,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. after waiting 0 ms 2023-07-14 17:14:08,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 
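The second create of 't1' above is rejected with TableExistsException and pid=15 is rolled back; the test then disables the table (pid=16), closing region d4406ab2874a10b7c3bd9d99b5e15033. A hedged sketch of that client-side pattern, with the Admin handle assumed to come from an already open Connection:

    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class DuplicateCreateSketch {
      // Re-creating an existing table surfaces on the client as TableExistsException.
      static void createThenDisable(Admin admin, TableName t1) throws Exception {
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(t1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
              .build());
        } catch (TableExistsException e) {
          // The master refuses the duplicate create and rolls the procedure back (pid=15 above).
        }
        admin.disableTable(t1);  // DisableTableProcedure, pid=16 above
      }
    }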
2023-07-14 17:14:08,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 17:14:08,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033. 2023-07-14 17:14:08,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d4406ab2874a10b7c3bd9d99b5e15033: 2023-07-14 17:14:08,372 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d4406ab2874a10b7c3bd9d99b5e15033, regionState=CLOSED 2023-07-14 17:14:08,373 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689354848372"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689354848372"}]},"ts":"1689354848372"} 2023-07-14 17:14:08,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-14 17:14:08,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure d4406ab2874a10b7c3bd9d99b5e15033, server=jenkins-hbase20.apache.org,35249,1689354847148 in 172 msec 2023-07-14 17:14:08,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:08,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-14 17:14:08,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=d4406ab2874a10b7c3bd9d99b5e15033, UNASSIGN in 191 msec 2023-07-14 17:14:08,383 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689354848383"}]},"ts":"1689354848383"} 2023-07-14 17:14:08,391 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-14 17:14:08,392 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-14 17:14:08,398 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 217 msec 2023-07-14 17:14:08,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 17:14:08,488 INFO [Listener at localhost.localdomain/41959] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-14 17:14:08,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete t1 2023-07-14 17:14:08,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-14 17:14:08,492 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-14 
17:14:08,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-14 17:14:08,493 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-14 17:14:08,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,497 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:08,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 17:14:08,499 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/cf1, FileablePath, hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/recovered.edits] 2023-07-14 17:14:08,509 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/recovered.edits/4.seqid to hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/archive/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033/recovered.edits/4.seqid 2023-07-14 17:14:08,509 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/.tmp/data/default/t1/d4406ab2874a10b7c3bd9d99b5e15033 2023-07-14 17:14:08,509 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-14 17:14:08,513 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-14 17:14:08,520 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-14 17:14:08,524 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-14 17:14:08,525 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-14 17:14:08,525 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
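Pid=19 above walks DeleteTableProcedure through its states: the region directory is archived under .../archive/data/default/t1, the table's assignment state and descriptor are dropped, and its rows are cleared from hbase:meta. From a client the whole sequence is two Admin calls; a minimal sketch, with the connection setup assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName t1 = TableName.valueOf("t1");
          if (admin.isTableEnabled(t1)) {
            admin.disableTable(t1);  // unassign regions, mark DISABLED in hbase:meta
          }
          admin.deleteTable(t1);     // archive the filesystem layout and purge the meta rows
        }
      }
    }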
2023-07-14 17:14:08,526 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689354848525"}]},"ts":"9223372036854775807"} 2023-07-14 17:14:08,528 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 17:14:08,528 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d4406ab2874a10b7c3bd9d99b5e15033, NAME => 't1,,1689354847456.d4406ab2874a10b7c3bd9d99b5e15033.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 17:14:08,528 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-14 17:14:08,528 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689354848528"}]},"ts":"9223372036854775807"} 2023-07-14 17:14:08,531 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-14 17:14:08,534 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-14 17:14:08,535 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 45 msec 2023-07-14 17:14:08,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 17:14:08,600 INFO [Listener at localhost.localdomain/41959] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-14 17:14:08,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
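What follows is the per-test rsgroup cleanup: the empty 'master' group is removed and re-added, and the harness then tries to move the master's address, jenkins-hbase20.apache.org:39335, into it, which the group manager rejects with ConstraintException because that address is not a registered region server. A sketch of those calls using RSGroupAdminClient, with the Connection assumed to be already open:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupCleanupSketch {
      static void resetMasterGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.removeRSGroup("master");  // succeeds here because the group is empty
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 39335)),
              "master");
        } catch (ConstraintException e) {
          // "Server ... is either offline or it does not exist": the master is not a
          // region server, so its address is unknown to the group manager.
        }
      }
    }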
2023-07-14 17:14:08,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,613 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:38230 deadline: 1689356048633, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,634 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:14:08,637 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,638 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,659 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567 (was 559) - Thread LEAK? -, OpenFileDescriptor=833 (was 832) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=521 (was 521), ProcessCount=175 (was 173) - ProcessCount LEAK? 
-, AvailableMemoryMB=3004 (was 3139) 2023-07-14 17:14:08,659 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-14 17:14:08,680 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=567, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=521, ProcessCount=175, AvailableMemoryMB=3000 2023-07-14 17:14:08,681 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-14 17:14:08,681 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-14 17:14:08,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:14:08,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,696 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356048712, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,712 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:14:08,714 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,716 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-14 17:14:08,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:14:08,720 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-14 17:14:08,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-14 17:14:08,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 17:14:08,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
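The next test, testNonExistentTableMove, asks for the group of table GrouptestNonExistentTableMove, which was never created, and then moves it toward the default group. A hedged sketch of that probe; the lookup is expected to return null for an unknown table, and the outcome of the move is deliberately not asserted here:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class NonExistentTableMoveSketch {
      static void probe(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        TableName absent = TableName.valueOf("GrouptestNonExistentTableMove");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(absent);  // no mapping: null
        try {
          rsGroupAdmin.moveTables(Collections.singleton(absent), RSGroupInfo.DEFAULT_GROUP);
        } catch (IOException e) {
          // The master may reject a move for a table it has no record of; not asserted here.
        }
      }
    }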
2023-07-14 17:14:08,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,744 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356048763, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,764 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:14:08,766 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,768 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,789 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 567) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=521 (was 521), ProcessCount=175 (was 175), AvailableMemoryMB=2986 (was 3000) 2023-07-14 17:14:08,789 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-14 17:14:08,806 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=521, ProcessCount=175, AvailableMemoryMB=2984 2023-07-14 17:14:08,806 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-14 17:14:08,806 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-14 17:14:08,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:14:08,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,817 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356048828, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,829 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:14:08,831 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,832 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:14:08,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,845 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356048854, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,855 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:14:08,856 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,858 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,874 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=521 (was 521), ProcessCount=175 (was 175), AvailableMemoryMB=2972 (was 2984) 2023-07-14 17:14:08,874 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-14 17:14:08,891 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=521, ProcessCount=175, AvailableMemoryMB=2967 2023-07-14 17:14:08,892 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-14 17:14:08,892 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-14 17:14:08,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:08,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 17:14:08,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:08,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:08,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:08,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:08,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:08,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:08,904 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:08,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:08,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:08,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:08,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:08,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356048912, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:08,913 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 17:14:08,914 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:08,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,915 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:08,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:08,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:08,916 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-14 17:14:08,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_foo 2023-07-14 17:14:08,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-14 17:14:08,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:08,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:08,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 17:14:08,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:08,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:08,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:08,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-14 17:14:08,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-14 17:14:08,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 17:14:08,937 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 17:14:08,939 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-14 17:14:09,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 17:14:09,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-14 17:14:09,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:09,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 148.251.75.209:38230 deadline: 1689356049033, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-14 17:14:09,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$16(3053): Client=jenkins//148.251.75.209 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-14 17:14:09,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-14 17:14:09,053 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-14 17:14:09,054 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-14 17:14:09,153 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-14 17:14:09,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_anotherGroup 2023-07-14 17:14:09,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-14 17:14:09,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:09,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-14 17:14:09,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:09,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 17:14:09,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:09,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:09,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:09,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete Group_foo 2023-07-14 17:14:09,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,171 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,174 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-14 17:14:09,175 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,176 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-14 17:14:09,176 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 
2023-07-14 17:14:09,176 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,178 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 17:14:09,179 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-14 17:14:09,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-14 17:14:09,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-14 17:14:09,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-14 17:14:09,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:09,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:09,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 17:14:09,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:09,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:09,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:09,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:09,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 148.251.75.209:38230 deadline: 1689354909291, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-14 17:14:09,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:09,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:09,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:09,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:14:09,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:09,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:09,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:09,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_anotherGroup 2023-07-14 17:14:09,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:09,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:09,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 17:14:09,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:09,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-14 17:14:09,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 17:14:09,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-14 17:14:09,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-14 17:14:09,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-14 17:14:09,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-14 17:14:09,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:09,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 17:14:09,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 17:14:09,327 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 17:14:09,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-14 17:14:09,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 17:14:09,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 17:14:09,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 17:14:09,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 17:14:09,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:09,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:09,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39335] to rsgroup master 2023-07-14 17:14:09,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 17:14:09,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:38230 deadline: 1689356049352, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 2023-07-14 17:14:09,353 WARN [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:39335 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 17:14:09,355 INFO [Listener at localhost.localdomain/41959] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 17:14:09,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-14 17:14:09,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 17:14:09,357 INFO [Listener at localhost.localdomain/41959] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:35249, jenkins-hbase20.apache.org:40919, jenkins-hbase20.apache.org:43799, jenkins-hbase20.apache.org:45741], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 17:14:09,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-14 17:14:09,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39335] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 17:14:09,389 INFO [Listener at localhost.localdomain/41959] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570 (was 570), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=521 (was 521), ProcessCount=175 (was 175), AvailableMemoryMB=2901 (was 2967) 2023-07-14 17:14:09,389 WARN [Listener at localhost.localdomain/41959] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-14 17:14:09,389 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-14 17:14:09,389 INFO [Listener at localhost.localdomain/41959] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 17:14:09,389 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x04a53f68 to 127.0.0.1:53758 2023-07-14 17:14:09,390 DEBUG [Listener at localhost.localdomain/41959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,390 
DEBUG [Listener at localhost.localdomain/41959] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 17:14:09,390 DEBUG [Listener at localhost.localdomain/41959] util.JVMClusterUtil(257): Found active master hash=1320671413, stopped=false 2023-07-14 17:14:09,390 DEBUG [Listener at localhost.localdomain/41959] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 17:14:09,390 DEBUG [Listener at localhost.localdomain/41959] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 17:14:09,390 INFO [Listener at localhost.localdomain/41959] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:09,391 INFO [Listener at localhost.localdomain/41959] procedure2.ProcedureExecutor(629): Stopping 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:09,391 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 17:14:09,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:09,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:09,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:09,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:09,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43799-0x1008c79ba620002, 
quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 17:14:09,392 DEBUG [Listener at localhost.localdomain/41959] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39083b67 to 127.0.0.1:53758 2023-07-14 17:14:09,392 DEBUG [Listener at localhost.localdomain/41959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,392 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,45741,1689354845740' ***** 2023-07-14 17:14:09,392 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:09,392 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,43799,1689354845783' ***** 2023-07-14 17:14:09,392 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:09,392 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:09,402 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,40919,1689354845821' ***** 2023-07-14 17:14:09,403 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:09,403 INFO [RS:0;jenkins-hbase20:45741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3a8b1fd4{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:09,403 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:09,403 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,35249,1689354847148' ***** 2023-07-14 17:14:09,403 INFO [Listener at localhost.localdomain/41959] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 17:14:09,403 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:09,403 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:09,405 INFO [RS:0;jenkins-hbase20:45741] server.AbstractConnector(383): Stopped ServerConnector@39269edd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,405 INFO [RS:0;jenkins-hbase20:45741] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:09,407 INFO [RS:0;jenkins-hbase20:45741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31ed1238{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:09,407 INFO [RS:2;jenkins-hbase20:40919] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@679a327e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:09,410 INFO [RS:3;jenkins-hbase20:35249] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@396b9af3{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 
2023-07-14 17:14:09,410 INFO [RS:3;jenkins-hbase20:35249] server.AbstractConnector(383): Stopped ServerConnector@37afaada{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,410 INFO [RS:1;jenkins-hbase20:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2006aff9{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-14 17:14:09,410 INFO [RS:3;jenkins-hbase20:35249] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:09,411 INFO [RS:2;jenkins-hbase20:40919] server.AbstractConnector(383): Stopped ServerConnector@65cd1930{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,411 INFO [RS:2;jenkins-hbase20:40919] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:09,413 INFO [RS:0;jenkins-hbase20:45741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@696a04fe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:09,413 INFO [RS:3;jenkins-hbase20:35249] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b8742d0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:09,415 INFO [RS:3;jenkins-hbase20:35249] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ee67f7f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:09,415 INFO [RS:1;jenkins-hbase20:43799] server.AbstractConnector(383): Stopped ServerConnector@4d9fdcd9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,415 INFO [RS:1;jenkins-hbase20:43799] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:09,415 INFO [RS:2;jenkins-hbase20:40919] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e257ad7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:09,416 INFO [RS:1;jenkins-hbase20:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5428a3f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:09,415 INFO [RS:0;jenkins-hbase20:45741] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:09,417 INFO [RS:3;jenkins-hbase20:35249] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:09,418 INFO [RS:1;jenkins-hbase20:43799] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46751bf1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:09,418 INFO [RS:2;jenkins-hbase20:40919] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d67d082{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:09,417 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): 
MemStoreFlusher.0 exiting 2023-07-14 17:14:09,417 INFO [RS:0;jenkins-hbase20:45741] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:09,418 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:09,419 INFO [RS:1;jenkins-hbase20:43799] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:09,419 INFO [RS:1;jenkins-hbase20:43799] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:09,419 INFO [RS:1;jenkins-hbase20:43799] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:14:09,419 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:09,419 DEBUG [RS:1;jenkins-hbase20:43799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7119fb69 to 127.0.0.1:53758 2023-07-14 17:14:09,419 DEBUG [RS:1;jenkins-hbase20:43799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,420 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43799,1689354845783; all regions closed. 2023-07-14 17:14:09,418 INFO [RS:3;jenkins-hbase20:35249] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:09,419 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:09,419 INFO [RS:0;jenkins-hbase20:45741] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:14:09,421 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(3305): Received CLOSE for fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:09,421 INFO [RS:3;jenkins-hbase20:35249] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 17:14:09,421 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:09,421 DEBUG [RS:3;jenkins-hbase20:35249] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x51111ef7 to 127.0.0.1:53758 2023-07-14 17:14:09,421 DEBUG [RS:3;jenkins-hbase20:35249] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,421 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,35249,1689354847148; all regions closed. 2023-07-14 17:14:09,438 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(3305): Received CLOSE for c02f06fa368297001184995300a52985 2023-07-14 17:14:09,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fd011a1ad1184dc66ae1fa5bcb4bbecd, disabling compactions & flushes 2023-07-14 17:14:09,439 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:09,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 
2023-07-14 17:14:09,439 DEBUG [RS:0;jenkins-hbase20:45741] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x35862fbf to 127.0.0.1:53758 2023-07-14 17:14:09,439 DEBUG [RS:0;jenkins-hbase20:45741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,439 INFO [RS:0;jenkins-hbase20:45741] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:09,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:09,439 INFO [RS:0;jenkins-hbase20:45741] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:09,439 INFO [RS:0;jenkins-hbase20:45741] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:09,440 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 17:14:09,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. after waiting 0 ms 2023-07-14 17:14:09,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:09,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing fd011a1ad1184dc66ae1fa5bcb4bbecd 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-14 17:14:09,448 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-14 17:14:09,448 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1478): Online Regions={fd011a1ad1184dc66ae1fa5bcb4bbecd=hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd., 1588230740=hbase:meta,,1.1588230740, c02f06fa368297001184995300a52985=hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985.} 2023-07-14 17:14:09,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 17:14:09,448 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1504): Waiting on 1588230740, c02f06fa368297001184995300a52985, fd011a1ad1184dc66ae1fa5bcb4bbecd 2023-07-14 17:14:09,448 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 17:14:09,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 17:14:09,448 INFO [RS:2;jenkins-hbase20:40919] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 17:14:09,449 INFO [RS:2;jenkins-hbase20:40919] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 17:14:09,449 INFO [RS:2;jenkins-hbase20:40919] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-14 17:14:09,449 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:09,449 DEBUG [RS:2;jenkins-hbase20:40919] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x696f1d33 to 127.0.0.1:53758 2023-07-14 17:14:09,449 DEBUG [RS:2;jenkins-hbase20:40919] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,449 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40919,1689354845821; all regions closed. 2023-07-14 17:14:09,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 17:14:09,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 17:14:09,450 DEBUG [RS:3;jenkins-hbase20:35249] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs 2023-07-14 17:14:09,451 INFO [RS:3;jenkins-hbase20:35249] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C35249%2C1689354847148:(num 1689354847374) 2023-07-14 17:14:09,451 DEBUG [RS:3;jenkins-hbase20:35249] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,451 INFO [RS:3;jenkins-hbase20:35249] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,451 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-14 17:14:09,450 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,449 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 17:14:09,458 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,458 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,459 INFO [RS:3;jenkins-hbase20:35249] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:09,459 INFO [RS:3;jenkins-hbase20:35249] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:09,459 INFO [RS:3;jenkins-hbase20:35249] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:09,458 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,459 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:14:09,459 INFO [RS:3;jenkins-hbase20:35249] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-14 17:14:09,461 INFO [RS:3;jenkins-hbase20:35249] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:35249 2023-07-14 17:14:09,476 DEBUG [RS:1;jenkins-hbase20:43799] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs 2023-07-14 17:14:09,476 INFO [RS:1;jenkins-hbase20:43799] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43799%2C1689354845783:(num 1689354846277) 2023-07-14 17:14:09,476 DEBUG [RS:1;jenkins-hbase20:43799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,476 INFO [RS:1;jenkins-hbase20:43799] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,479 INFO [RS:1;jenkins-hbase20:43799] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:09,479 INFO [RS:1;jenkins-hbase20:43799] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:09,479 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 17:14:09,479 INFO [RS:1;jenkins-hbase20:43799] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:09,479 INFO [RS:1;jenkins-hbase20:43799] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:09,498 INFO [RS:1;jenkins-hbase20:43799] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43799 2023-07-14 17:14:09,504 DEBUG [RS:2;jenkins-hbase20:40919] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs 2023-07-14 17:14:09,504 INFO [RS:2;jenkins-hbase20:40919] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C40919%2C1689354845821:(num 1689354846273) 2023-07-14 17:14:09,504 DEBUG [RS:2;jenkins-hbase20:40919] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,504 INFO [RS:2;jenkins-hbase20:40919] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): 
regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43799,1689354845783 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:09,522 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,35249,1689354847148 2023-07-14 17:14:09,526 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,531 INFO [RS:2;jenkins-hbase20:40919] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:09,531 INFO [RS:2;jenkins-hbase20:40919] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 17:14:09,531 INFO [RS:2;jenkins-hbase20:40919] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 17:14:09,531 INFO [RS:2;jenkins-hbase20:40919] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 17:14:09,531 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:14:09,540 INFO [RS:2;jenkins-hbase20:40919] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40919 2023-07-14 17:14:09,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/.tmp/m/c0058686a90c488cb8603412cc6f2d51 2023-07-14 17:14:09,557 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/info/32498921a46b4edf9a0e9c3b4992d2d5 2023-07-14 17:14:09,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c0058686a90c488cb8603412cc6f2d51 2023-07-14 17:14:09,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32498921a46b4edf9a0e9c3b4992d2d5 2023-07-14 17:14:09,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/.tmp/m/c0058686a90c488cb8603412cc6f2d51 as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/m/c0058686a90c488cb8603412cc6f2d51 2023-07-14 17:14:09,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c0058686a90c488cb8603412cc6f2d51 2023-07-14 17:14:09,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/m/c0058686a90c488cb8603412cc6f2d51, entries=12, sequenceid=29, filesize=5.5 K 2023-07-14 17:14:09,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for fd011a1ad1184dc66ae1fa5bcb4bbecd in 138ms, sequenceid=29, compaction requested=false 2023-07-14 17:14:09,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/rep_barrier/ce06cd87109f410d992d4386ee0771ae 2023-07-14 17:14:09,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/rsgroup/fd011a1ad1184dc66ae1fa5bcb4bbecd/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:14:09,605 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fd011a1ad1184dc66ae1fa5bcb4bbecd: 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689354846618.fd011a1ad1184dc66ae1fa5bcb4bbecd. 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c02f06fa368297001184995300a52985, disabling compactions & flushes 2023-07-14 17:14:09,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:09,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. after waiting 0 ms 2023-07-14 17:14:09,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:09,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c02f06fa368297001184995300a52985 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-14 17:14:09,609 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce06cd87109f410d992d4386ee0771ae 2023-07-14 17:14:09,648 DEBUG [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1504): Waiting on 1588230740, c02f06fa368297001184995300a52985 2023-07-14 17:14:09,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/.tmp/info/b68ecf19f1f44145a3fa154a8ef028c8 2023-07-14 17:14:09,654 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/table/a5af4673245940af8234c60caf7561f6 2023-07-14 17:14:09,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b68ecf19f1f44145a3fa154a8ef028c8 2023-07-14 17:14:09,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/.tmp/info/b68ecf19f1f44145a3fa154a8ef028c8 as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/info/b68ecf19f1f44145a3fa154a8ef028c8 2023-07-14 17:14:09,662 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5af4673245940af8234c60caf7561f6 2023-07-14 17:14:09,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/info/32498921a46b4edf9a0e9c3b4992d2d5 as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/info/32498921a46b4edf9a0e9c3b4992d2d5 2023-07-14 17:14:09,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b68ecf19f1f44145a3fa154a8ef028c8 2023-07-14 17:14:09,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/info/b68ecf19f1f44145a3fa154a8ef028c8, entries=3, sequenceid=9, filesize=5.0 K 2023-07-14 17:14:09,674 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32498921a46b4edf9a0e9c3b4992d2d5 2023-07-14 17:14:09,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/info/32498921a46b4edf9a0e9c3b4992d2d5, entries=22, sequenceid=26, filesize=7.3 K 2023-07-14 17:14:09,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for c02f06fa368297001184995300a52985 in 69ms, sequenceid=9, compaction requested=false 2023-07-14 17:14:09,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/rep_barrier/ce06cd87109f410d992d4386ee0771ae as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/rep_barrier/ce06cd87109f410d992d4386ee0771ae 2023-07-14 17:14:09,688 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce06cd87109f410d992d4386ee0771ae 2023-07-14 17:14:09,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/namespace/c02f06fa368297001184995300a52985/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-14 17:14:09,688 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/rep_barrier/ce06cd87109f410d992d4386ee0771ae, entries=1, sequenceid=26, filesize=4.9 K 2023-07-14 17:14:09,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 
2023-07-14 17:14:09,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c02f06fa368297001184995300a52985: 2023-07-14 17:14:09,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689354846535.c02f06fa368297001184995300a52985. 2023-07-14 17:14:09,690 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/.tmp/table/a5af4673245940af8234c60caf7561f6 as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/table/a5af4673245940af8234c60caf7561f6 2023-07-14 17:14:09,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5af4673245940af8234c60caf7561f6 2023-07-14 17:14:09,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/table/a5af4673245940af8234c60caf7561f6, entries=6, sequenceid=26, filesize=5.1 K 2023-07-14 17:14:09,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 1588230740 in 247ms, sequenceid=26, compaction requested=false 2023-07-14 17:14:09,718 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-14 17:14:09,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 17:14:09,719 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:09,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 17:14:09,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 17:14:09,727 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:09,727 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40919,1689354845821 2023-07-14 17:14:09,727 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,40919,1689354845821] 2023-07-14 17:14:09,727 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,40919,1689354845821; numProcessing=1 2023-07-14 17:14:09,788 DEBUG [Listener at 
localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:09,788 INFO [RS:1;jenkins-hbase20:43799] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43799,1689354845783; zookeeper connection closed. 2023-07-14 17:14:09,788 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:43799-0x1008c79ba620002, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:09,788 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@37a2b13e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@37a2b13e 2023-07-14 17:14:09,789 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,40919,1689354845821 already deleted, retry=false 2023-07-14 17:14:09,789 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,40919,1689354845821 expired; onlineServers=3 2023-07-14 17:14:09,789 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43799,1689354845783] 2023-07-14 17:14:09,789 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43799,1689354845783; numProcessing=2 2023-07-14 17:14:09,790 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43799,1689354845783 already deleted, retry=false 2023-07-14 17:14:09,790 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43799,1689354845783 expired; onlineServers=2 2023-07-14 17:14:09,790 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,35249,1689354847148] 2023-07-14 17:14:09,790 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,35249,1689354847148; numProcessing=3 2023-07-14 17:14:09,791 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,35249,1689354847148 already deleted, retry=false 2023-07-14 17:14:09,791 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,35249,1689354847148 expired; onlineServers=1 2023-07-14 17:14:09,795 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:09,795 INFO [RS:3;jenkins-hbase20:35249] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,35249,1689354847148; zookeeper connection closed. 
2023-07-14 17:14:09,795 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:35249-0x1008c79ba62000b, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:09,795 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@78ee26f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@78ee26f4 2023-07-14 17:14:09,849 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45741,1689354845740; all regions closed. 2023-07-14 17:14:09,853 DEBUG [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs 2023-07-14 17:14:09,853 INFO [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45741%2C1689354845740.meta:.meta(num 1689354846451) 2023-07-14 17:14:09,861 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/WALs/jenkins-hbase20.apache.org,45741,1689354845740/jenkins-hbase20.apache.org%2C45741%2C1689354845740.1689354846278 not finished, retry = 0 2023-07-14 17:14:09,967 DEBUG [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/oldWALs 2023-07-14 17:14:09,967 INFO [RS:0;jenkins-hbase20:45741] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C45741%2C1689354845740:(num 1689354846278) 2023-07-14 17:14:09,967 DEBUG [RS:0;jenkins-hbase20:45741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,967 INFO [RS:0;jenkins-hbase20:45741] regionserver.LeaseManager(133): Closed leases 2023-07-14 17:14:09,968 INFO [RS:0;jenkins-hbase20:45741] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 17:14:09,968 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:14:09,969 INFO [RS:0;jenkins-hbase20:45741] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45741 2023-07-14 17:14:09,971 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45741,1689354845740 2023-07-14 17:14:09,971 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 17:14:09,971 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45741,1689354845740] 2023-07-14 17:14:09,971 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45741,1689354845740; numProcessing=4 2023-07-14 17:14:09,972 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45741,1689354845740 already deleted, retry=false 2023-07-14 17:14:09,972 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45741,1689354845740 expired; onlineServers=0 2023-07-14 17:14:09,972 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,39335,1689354845688' ***** 2023-07-14 17:14:09,972 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 17:14:09,973 DEBUG [M:0;jenkins-hbase20:39335] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a8855bb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-14 17:14:09,973 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 17:14:09,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 17:14:09,976 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 17:14:09,976 INFO [M:0;jenkins-hbase20:39335] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@372bd8f5{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-14 17:14:09,977 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 17:14:09,977 INFO [M:0;jenkins-hbase20:39335] server.AbstractConnector(383): Stopped ServerConnector@392abe7f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,977 INFO [M:0;jenkins-hbase20:39335] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 17:14:09,978 INFO [M:0;jenkins-hbase20:39335] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2bcb7f4e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-14 17:14:09,979 INFO [M:0;jenkins-hbase20:39335] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6103c650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/hadoop.log.dir/,STOPPED} 2023-07-14 17:14:09,986 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39335,1689354845688 2023-07-14 17:14:09,987 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39335,1689354845688; all regions closed. 2023-07-14 17:14:09,987 DEBUG [M:0;jenkins-hbase20:39335] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 17:14:09,987 INFO [M:0;jenkins-hbase20:39335] master.HMaster(1491): Stopping master jetty server 2023-07-14 17:14:09,987 INFO [M:0;jenkins-hbase20:39335] server.AbstractConnector(383): Stopped ServerConnector@48062a1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 17:14:09,988 DEBUG [M:0;jenkins-hbase20:39335] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 17:14:09,988 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-14 17:14:09,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354846030] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689354846030,5,FailOnTimeoutGroup] 2023-07-14 17:14:09,988 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354846030] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689354846030,5,FailOnTimeoutGroup] 2023-07-14 17:14:09,988 DEBUG [M:0;jenkins-hbase20:39335] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 17:14:09,988 INFO [M:0;jenkins-hbase20:39335] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 17:14:09,988 INFO [M:0;jenkins-hbase20:39335] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-14 17:14:09,988 INFO [M:0;jenkins-hbase20:39335] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-14 17:14:09,988 DEBUG [M:0;jenkins-hbase20:39335] master.HMaster(1512): Stopping service threads 2023-07-14 17:14:09,988 INFO [M:0;jenkins-hbase20:39335] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 17:14:09,988 ERROR [M:0;jenkins-hbase20:39335] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-14 17:14:09,989 INFO [M:0;jenkins-hbase20:39335] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 17:14:09,989 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-14 17:14:09,989 DEBUG [M:0;jenkins-hbase20:39335] zookeeper.ZKUtil(398): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-14 17:14:09,989 WARN [M:0;jenkins-hbase20:39335] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-14 17:14:09,989 INFO [M:0;jenkins-hbase20:39335] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 17:14:09,989 INFO [M:0;jenkins-hbase20:39335] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 17:14:09,989 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 17:14:09,989 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:09,989 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:09,989 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 17:14:09,989 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:09,989 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.24 KB heapSize=90.71 KB 2023-07-14 17:14:09,995 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:09,995 INFO [RS:2;jenkins-hbase20:40919] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40919,1689354845821; zookeeper connection closed. 2023-07-14 17:14:09,995 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:40919-0x1008c79ba620003, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:10,004 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7bcddfe8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7bcddfe8 2023-07-14 17:14:10,095 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:10,096 INFO [RS:0;jenkins-hbase20:45741] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45741,1689354845740; zookeeper connection closed. 
2023-07-14 17:14:10,096 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): regionserver:45741-0x1008c79ba620001, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:10,096 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7d9cf1ce] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7d9cf1ce 2023-07-14 17:14:10,096 INFO [Listener at localhost.localdomain/41959] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-14 17:14:10,414 INFO [M:0;jenkins-hbase20:39335] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.24 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/00436d1e5408499a9d0d72e262702d9d 2023-07-14 17:14:10,421 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/00436d1e5408499a9d0d72e262702d9d as hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/00436d1e5408499a9d0d72e262702d9d 2023-07-14 17:14:10,428 INFO [M:0;jenkins-hbase20:39335] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43505/user/jenkins/test-data/adf473e0-b784-cd20-5d82-9bd56211e644/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/00436d1e5408499a9d0d72e262702d9d, entries=22, sequenceid=175, filesize=11.1 K 2023-07-14 17:14:10,429 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegion(2948): Finished flush of dataSize ~76.24 KB/78066, heapSize ~90.70 KB/92872, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 440ms, sequenceid=175, compaction requested=false 2023-07-14 17:14:10,431 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 17:14:10,431 DEBUG [M:0;jenkins-hbase20:39335] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 17:14:10,435 INFO [M:0;jenkins-hbase20:39335] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 17:14:10,435 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 17:14:10,436 INFO [M:0;jenkins-hbase20:39335] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39335 2023-07-14 17:14:10,437 DEBUG [M:0;jenkins-hbase20:39335] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39335,1689354845688 already deleted, retry=false 2023-07-14 17:14:10,538 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:10,538 DEBUG [Listener at localhost.localdomain/41959-EventThread] zookeeper.ZKWatcher(600): master:39335-0x1008c79ba620000, quorum=127.0.0.1:53758, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 17:14:10,538 INFO [M:0;jenkins-hbase20:39335] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39335,1689354845688; zookeeper connection closed. 2023-07-14 17:14:10,541 WARN [Listener at localhost.localdomain/41959] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:10,546 INFO [Listener at localhost.localdomain/41959] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:10,649 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:10,650 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147832095-148.251.75.209-1689354845074 (Datanode Uuid 9e90f798-876f-4c34-a2a3-0867cd04ad27) service to localhost.localdomain/127.0.0.1:43505 2023-07-14 17:14:10,650 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data5/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,651 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data6/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,653 WARN [Listener at localhost.localdomain/41959] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:10,657 INFO [Listener at localhost.localdomain/41959] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:10,769 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:10,769 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147832095-148.251.75.209-1689354845074 (Datanode Uuid 045f435a-ce6b-4a2e-88dc-9f4c7a5ef1d0) service to localhost.localdomain/127.0.0.1:43505 2023-07-14 17:14:10,771 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data3/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,771 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data4/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,775 WARN [Listener at localhost.localdomain/41959] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 17:14:10,779 INFO [Listener at localhost.localdomain/41959] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 17:14:10,886 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 17:14:10,886 WARN [BP-147832095-148.251.75.209-1689354845074 heartbeating to localhost.localdomain/127.0.0.1:43505] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-147832095-148.251.75.209-1689354845074 (Datanode Uuid 43422b35-d4d0-4ea5-82ab-05b567aa19a2) service to localhost.localdomain/127.0.0.1:43505 2023-07-14 17:14:10,887 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data1/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,888 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/990cabbd-ea35-f72e-ae13-064ac0c6715e/cluster_98824d24-fe4e-afab-206a-2b04a249e34a/dfs/data/data2/current/BP-147832095-148.251.75.209-1689354845074] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 17:14:10,900 INFO [Listener at localhost.localdomain/41959] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-14 17:14:11,021 INFO [Listener at localhost.localdomain/41959] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-14 17:14:11,051 INFO [Listener at localhost.localdomain/41959] hbase.HBaseTestingUtility(1293): Minicluster is down