2023-07-21 08:14:30,496 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a 2023-07-21 08:14:30,512 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-21 08:14:30,532 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 08:14:30,533 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5, deleteOnExit=true 2023-07-21 08:14:30,533 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 08:14:30,534 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/test.cache.data in system properties and HBase conf 2023-07-21 08:14:30,534 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 08:14:30,535 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir in system properties and HBase conf 2023-07-21 08:14:30,535 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 08:14:30,535 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 08:14:30,536 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 08:14:30,654 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 08:14:31,037 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 08:14:31,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:14:31,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:14:31,044 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 08:14:31,044 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:14:31,044 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 08:14:31,045 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 08:14:31,045 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:14:31,045 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:14:31,046 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 08:14:31,046 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/nfs.dump.dir in system properties and HBase conf 2023-07-21 08:14:31,047 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir in system properties and HBase conf 2023-07-21 08:14:31,047 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:14:31,048 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 08:14:31,048 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 08:14:31,558 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:14:31,562 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:14:31,839 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 08:14:32,015 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 08:14:32,036 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:14:32,081 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:14:32,128 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/Jetty_localhost_37323_hdfs____l2xbi2/webapp 2023-07-21 08:14:32,291 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37323 2023-07-21 08:14:32,302 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:14:32,302 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:14:32,763 WARN [Listener at localhost/40383] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:14:32,826 WARN [Listener at localhost/40383] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:14:32,848 WARN [Listener at localhost/40383] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:14:32,855 INFO [Listener at localhost/40383] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:14:32,860 INFO [Listener at localhost/40383] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/Jetty_localhost_44499_datanode____v7eim3/webapp 2023-07-21 08:14:32,968 INFO [Listener at localhost/40383] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44499 2023-07-21 08:14:33,384 WARN [Listener at localhost/33263] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:14:33,401 WARN [Listener at localhost/33263] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:14:33,408 WARN [Listener at localhost/33263] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:14:33,410 INFO [Listener at localhost/33263] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:14:33,416 INFO [Listener at localhost/33263] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/Jetty_localhost_38653_datanode____gfxrna/webapp 2023-07-21 08:14:33,525 INFO [Listener at localhost/33263] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38653 2023-07-21 08:14:33,538 WARN [Listener at localhost/45055] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:14:33,580 WARN [Listener at localhost/45055] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:14:33,584 WARN [Listener at localhost/45055] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:14:33,586 INFO [Listener at localhost/45055] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:14:33,594 INFO [Listener at localhost/45055] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/Jetty_localhost_40553_datanode____aenx8e/webapp 2023-07-21 08:14:33,723 INFO [Listener at localhost/45055] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40553 2023-07-21 08:14:33,743 WARN [Listener at localhost/43961] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:14:33,967 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x900cc7f1cd00300d: Processing first storage report for DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e from datanode 436b67bb-3846-4788-ade9-6e39b308acdd 2023-07-21 08:14:33,969 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x900cc7f1cd00300d: from storage DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e node DatanodeRegistration(127.0.0.1:46363, datanodeUuid=436b67bb-3846-4788-ade9-6e39b308acdd, infoPort=33199, 
infoSecurePort=0, ipcPort=33263, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-21 08:14:33,970 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x95a26afc1a8981a0: Processing first storage report for DS-0fc516ec-6407-40a6-988b-0877a18a36a1 from datanode 40983784-f99d-46a1-b27c-40ed8e83e242 2023-07-21 08:14:33,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x95a26afc1a8981a0: from storage DS-0fc516ec-6407-40a6-988b-0877a18a36a1 node DatanodeRegistration(127.0.0.1:40235, datanodeUuid=40983784-f99d-46a1-b27c-40ed8e83e242, infoPort=38097, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:14:33,970 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3c0d819a6c8ad73: Processing first storage report for DS-eca07878-4005-417c-888b-ba108a64f751 from datanode 3e9c857b-bfa6-4261-9d27-1da317b47ae6 2023-07-21 08:14:33,970 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3c0d819a6c8ad73: from storage DS-eca07878-4005-417c-888b-ba108a64f751 node DatanodeRegistration(127.0.0.1:40079, datanodeUuid=3e9c857b-bfa6-4261-9d27-1da317b47ae6, infoPort=43481, infoSecurePort=0, ipcPort=45055, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:14:33,971 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x900cc7f1cd00300d: Processing first storage report for DS-439fab3c-20b4-4b99-ad4e-c6ec61ea22b2 from datanode 436b67bb-3846-4788-ade9-6e39b308acdd 2023-07-21 08:14:33,971 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x900cc7f1cd00300d: from storage DS-439fab3c-20b4-4b99-ad4e-c6ec61ea22b2 node DatanodeRegistration(127.0.0.1:46363, datanodeUuid=436b67bb-3846-4788-ade9-6e39b308acdd, infoPort=33199, infoSecurePort=0, ipcPort=33263, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:14:33,971 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x95a26afc1a8981a0: Processing first storage report for DS-38e5e195-5413-4111-988a-fa59dc0a4080 from datanode 40983784-f99d-46a1-b27c-40ed8e83e242 2023-07-21 08:14:33,971 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x95a26afc1a8981a0: from storage DS-38e5e195-5413-4111-988a-fa59dc0a4080 node DatanodeRegistration(127.0.0.1:40235, datanodeUuid=40983784-f99d-46a1-b27c-40ed8e83e242, infoPort=38097, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:14:33,971 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3c0d819a6c8ad73: Processing first storage report for DS-7997c88d-db24-4da0-a7f6-9457860f1fb6 from datanode 3e9c857b-bfa6-4261-9d27-1da317b47ae6 2023-07-21 08:14:33,972 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3c0d819a6c8ad73: from storage 
DS-7997c88d-db24-4da0-a7f6-9457860f1fb6 node DatanodeRegistration(127.0.0.1:40079, datanodeUuid=3e9c857b-bfa6-4261-9d27-1da317b47ae6, infoPort=43481, infoSecurePort=0, ipcPort=45055, storageInfo=lv=-57;cid=testClusterID;nsid=277399721;c=1689927271632), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 08:14:34,134 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a 2023-07-21 08:14:34,222 INFO [Listener at localhost/43961] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/zookeeper_0, clientPort=59404, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 08:14:34,236 INFO [Listener at localhost/43961] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59404 2023-07-21 08:14:34,248 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:34,251 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:34,902 INFO [Listener at localhost/43961] util.FSUtils(471): Created version file at hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b with version=8 2023-07-21 08:14:34,902 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/hbase-staging 2023-07-21 08:14:34,912 DEBUG [Listener at localhost/43961] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 08:14:34,912 DEBUG [Listener at localhost/43961] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 08:14:34,912 DEBUG [Listener at localhost/43961] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 08:14:34,912 DEBUG [Listener at localhost/43961] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
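The StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1} startup logged near the top of this section is typically produced by a JUnit bootstrap along the following lines. This is a minimal sketch assuming the stock HBase 2.x test utilities (HBaseTestingUtility, StartMiniClusterOption, HBaseClassTestRule); the class name and method bodies are illustrative, not copied from TestRSGroupsAdmin1 itself.

    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;

    public class MiniClusterBootstrapSketch {
      // HBaseClassTestRule enforces the per-class timeout (the "timeout: 13 mins" line above).
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterBootstrapSketch.class);

      protected static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpBeforeClass() throws Exception {
        // Mirrors the options logged by HBaseTestingUtility(1068): 1 master, 3 region servers,
        // 3 datanodes, 1 ZooKeeper server, no pre-created root/WAL dirs.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        TEST_UTIL.shutdownMiniCluster();
      }
    }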
2023-07-21 08:14:35,338 INFO [Listener at localhost/43961] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 08:14:35,891 INFO [Listener at localhost/43961] client.ConnectionUtils(127): master/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:14:35,928 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:35,929 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:35,929 INFO [Listener at localhost/43961] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:14:35,929 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:35,929 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:14:36,067 INFO [Listener at localhost/43961] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:14:36,144 DEBUG [Listener at localhost/43961] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 08:14:36,238 INFO [Listener at localhost/43961] ipc.NettyRpcServer(120): Bind to /172.31.10.131:46585 2023-07-21 08:14:36,249 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:36,251 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:36,279 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46585 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:36,340 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:465850x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:14:36,343 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46585-0x101f28e99290000 connected 2023-07-21 08:14:36,369 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:14:36,370 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:14:36,373 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:14:36,382 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46585 2023-07-21 08:14:36,382 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46585 2023-07-21 08:14:36,382 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46585 2023-07-21 08:14:36,383 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46585 2023-07-21 08:14:36,383 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46585 2023-07-21 08:14:36,415 INFO [Listener at localhost/43961] log.Log(170): Logging initialized @6627ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 08:14:36,544 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:14:36,544 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:14:36,545 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:14:36,547 INFO [Listener at localhost/43961] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 08:14:36,547 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:14:36,547 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:14:36,551 INFO [Listener at localhost/43961] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
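At this point the master has bound its RPC endpoint and set watchers on /hbase/master, /hbase/running and /hbase/acl in the 127.0.0.1:59404 ensemble. Test code reaches the cluster through the same Configuration the testing utility seeded with that quorum and client port. The sketch below uses only the public client API and assumes the TEST_UTIL field from the earlier sketch; listing table names is chosen only as a trivially safe call.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public final class MiniClusterClientSketch {
      static void printTables(HBaseTestingUtility testUtil) throws Exception {
        // The conf already carries hbase.zookeeper.quorum / clientPort for the mini ZK started above.
        Configuration conf = testUtil.getConfiguration();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          for (TableName tn : admin.listTableNames()) {
            System.out.println(tn.getNameAsString());
          }
        }
      }
    }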
2023-07-21 08:14:36,611 INFO [Listener at localhost/43961] http.HttpServer(1146): Jetty bound to port 46407 2023-07-21 08:14:36,612 INFO [Listener at localhost/43961] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:36,639 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:36,642 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@966e0ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:14:36,642 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:36,643 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5310d071{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:14:36,826 INFO [Listener at localhost/43961] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:14:36,838 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:14:36,838 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:14:36,840 INFO [Listener at localhost/43961] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:14:36,846 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:36,871 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@9ca6b1f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/jetty-0_0_0_0-46407-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5729093176809971481/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:14:36,883 INFO [Listener at localhost/43961] server.AbstractConnector(333): Started ServerConnector@668fa014{HTTP/1.1, (http/1.1)}{0.0.0.0:46407} 2023-07-21 08:14:36,883 INFO [Listener at localhost/43961] server.Server(415): Started @7096ms 2023-07-21 08:14:36,887 INFO [Listener at localhost/43961] master.HMaster(444): hbase.rootdir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b, hbase.cluster.distributed=false 2023-07-21 08:14:36,957 INFO [Listener at localhost/43961] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:14:36,957 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:36,957 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:36,957 INFO 
[Listener at localhost/43961] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:14:36,958 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:36,958 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:14:36,963 INFO [Listener at localhost/43961] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:14:36,966 INFO [Listener at localhost/43961] ipc.NettyRpcServer(120): Bind to /172.31.10.131:40889 2023-07-21 08:14:36,968 INFO [Listener at localhost/43961] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:14:36,975 DEBUG [Listener at localhost/43961] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:14:36,976 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:36,978 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:36,979 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40889 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:36,983 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:408890x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:14:36,984 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:408890x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:14:36,985 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40889-0x101f28e99290001 connected 2023-07-21 08:14:36,986 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:14:36,987 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:14:36,987 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40889 2023-07-21 08:14:36,987 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40889 2023-07-21 08:14:36,992 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40889 2023-07-21 08:14:36,992 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40889 2023-07-21 08:14:36,992 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40889 2023-07-21 08:14:36,995 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:14:36,995 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:14:36,996 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:14:36,997 INFO [Listener at localhost/43961] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:14:36,997 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:14:36,997 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:14:36,997 INFO [Listener at localhost/43961] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:14:37,000 INFO [Listener at localhost/43961] http.HttpServer(1146): Jetty bound to port 45829 2023-07-21 08:14:37,000 INFO [Listener at localhost/43961] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:37,006 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,006 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ffb745f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:14:37,007 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,007 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53b9762b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:14:37,136 INFO [Listener at localhost/43961] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:14:37,138 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:14:37,139 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:14:37,139 INFO [Listener at localhost/43961] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:14:37,140 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,144 INFO 
[Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6a9e2012{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/jetty-0_0_0_0-45829-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5224106776746084151/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:14:37,145 INFO [Listener at localhost/43961] server.AbstractConnector(333): Started ServerConnector@69156046{HTTP/1.1, (http/1.1)}{0.0.0.0:45829} 2023-07-21 08:14:37,145 INFO [Listener at localhost/43961] server.Server(415): Started @7358ms 2023-07-21 08:14:37,158 INFO [Listener at localhost/43961] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:14:37,158 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,159 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,159 INFO [Listener at localhost/43961] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:14:37,160 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,160 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:14:37,160 INFO [Listener at localhost/43961] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:14:37,162 INFO [Listener at localhost/43961] ipc.NettyRpcServer(120): Bind to /172.31.10.131:37025 2023-07-21 08:14:37,162 INFO [Listener at localhost/43961] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:14:37,164 DEBUG [Listener at localhost/43961] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:14:37,165 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:37,166 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:37,167 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37025 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:37,171 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:370250x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
08:14:37,172 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37025-0x101f28e99290002 connected 2023-07-21 08:14:37,172 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:14:37,173 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:14:37,174 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:14:37,174 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37025 2023-07-21 08:14:37,180 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37025 2023-07-21 08:14:37,180 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37025 2023-07-21 08:14:37,180 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37025 2023-07-21 08:14:37,182 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37025 2023-07-21 08:14:37,185 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:14:37,185 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:14:37,185 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:14:37,186 INFO [Listener at localhost/43961] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:14:37,186 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:14:37,186 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:14:37,187 INFO [Listener at localhost/43961] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
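The region servers being wired up in this stretch (for example the ones bound to ports 40889 and 37025 above) are what the rsgroup assertions in TestRSGroupsAdmin1 operate on once the cluster is live. The following is a sketch of that kind of group manipulation, assuming the classic RSGroupAdminClient and Address API from this hbase-rsgroup module (branch-2 method names); the group name and server address are placeholders, a real test reads the address from the running cluster.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupAdminSketch {
      static void moveOneServer(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("appA");  // create a new region server group
        // Placeholder host:port; in a test this comes from the minicluster's live server list.
        Address server = Address.fromParts("jenkins-hbase5.apache.org", 40889);
        rsGroupAdmin.moveServers(Collections.singleton(server), "appA");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appA");
        System.out.println(info.getName() + " -> " + info.getServers());
      }
    }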
2023-07-21 08:14:37,187 INFO [Listener at localhost/43961] http.HttpServer(1146): Jetty bound to port 45753 2023-07-21 08:14:37,188 INFO [Listener at localhost/43961] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:37,191 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,192 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19459c3b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:14:37,192 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,193 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33446a5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:14:37,324 INFO [Listener at localhost/43961] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:14:37,325 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:14:37,325 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:14:37,325 INFO [Listener at localhost/43961] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:14:37,327 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,328 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1b56eac1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/jetty-0_0_0_0-45753-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4464821784127240367/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:14:37,329 INFO [Listener at localhost/43961] server.AbstractConnector(333): Started ServerConnector@5d0a3d54{HTTP/1.1, (http/1.1)}{0.0.0.0:45753} 2023-07-21 08:14:37,330 INFO [Listener at localhost/43961] server.Server(415): Started @7542ms 2023-07-21 08:14:37,347 INFO [Listener at localhost/43961] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:14:37,347 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,347 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,347 INFO [Listener at localhost/43961] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:14:37,348 INFO 
[Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:37,348 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:14:37,348 INFO [Listener at localhost/43961] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:14:37,350 INFO [Listener at localhost/43961] ipc.NettyRpcServer(120): Bind to /172.31.10.131:40169 2023-07-21 08:14:37,350 INFO [Listener at localhost/43961] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:14:37,352 DEBUG [Listener at localhost/43961] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:14:37,353 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:37,355 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:37,357 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40169 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:37,367 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:401690x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:14:37,368 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:401690x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:14:37,369 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:401690x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:14:37,370 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:401690x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:14:37,374 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40169-0x101f28e99290003 connected 2023-07-21 08:14:37,374 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40169 2023-07-21 08:14:37,374 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40169 2023-07-21 08:14:37,375 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40169 2023-07-21 08:14:37,375 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40169 2023-07-21 08:14:37,377 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40169 2023-07-21 
08:14:37,379 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:14:37,379 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:14:37,380 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:14:37,380 INFO [Listener at localhost/43961] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:14:37,380 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:14:37,380 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:14:37,381 INFO [Listener at localhost/43961] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:14:37,382 INFO [Listener at localhost/43961] http.HttpServer(1146): Jetty bound to port 39057 2023-07-21 08:14:37,382 INFO [Listener at localhost/43961] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:37,384 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,385 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f20ff62{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:14:37,385 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,386 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2855a58d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:14:37,521 INFO [Listener at localhost/43961] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:14:37,522 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:14:37,523 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:14:37,523 INFO [Listener at localhost/43961] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:14:37,524 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:37,525 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7cc51cf3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/jetty-0_0_0_0-39057-hbase-server-2_4_18-SNAPSHOT_jar-_-any-797339076202463375/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:14:37,527 INFO [Listener at localhost/43961] server.AbstractConnector(333): Started ServerConnector@3023e605{HTTP/1.1, (http/1.1)}{0.0.0.0:39057} 2023-07-21 08:14:37,527 INFO [Listener at localhost/43961] server.Server(415): Started @7739ms 2023-07-21 08:14:37,533 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:37,539 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@534dab38{HTTP/1.1, (http/1.1)}{0.0.0.0:43029} 2023-07-21 08:14:37,540 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(415): Started @7752ms 2023-07-21 08:14:37,540 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:37,550 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:14:37,552 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:37,574 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:14:37,575 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:14:37,574 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:14:37,574 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:14:37,576 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:37,577 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:14:37,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:14:37,579 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase5.apache.org,46585,1689927275104 from backup master directory 2023-07-21 08:14:37,583 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:37,583 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:14:37,584 WARN [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:14:37,584 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:37,588 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 08:14:37,589 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 08:14:37,700 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/hbase.id with ID: 2d38dc86-158b-47bb-a505-fbc95c81dab0 2023-07-21 08:14:37,747 INFO [master/jenkins-hbase5:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:37,767 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:37,831 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x21869011 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:37,866 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79220bda, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:37,894 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:37,896 INFO [master/jenkins-hbase5:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 08:14:37,917 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 08:14:37,917 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 08:14:37,918 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 08:14:37,923 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 08:14:37,924 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:37,962 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store-tmp 2023-07-21 08:14:38,004 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:38,004 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:14:38,004 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:14:38,004 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:14:38,005 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:14:38,005 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:14:38,005 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 08:14:38,005 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:14:38,007 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/WALs/jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:38,028 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C46585%2C1689927275104, suffix=, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/WALs/jenkins-hbase5.apache.org,46585,1689927275104, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/oldWALs, maxLogs=10 2023-07-21 08:14:38,091 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:38,091 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:38,098 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:38,101 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 08:14:38,195 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/WALs/jenkins-hbase5.apache.org,46585,1689927275104/jenkins-hbase5.apache.org%2C46585%2C1689927275104.1689927278039 2023-07-21 08:14:38,196 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK]] 2023-07-21 08:14:38,197 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:38,198 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:38,202 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,204 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,308 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,315 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 08:14:38,346 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 08:14:38,358 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 08:14:38,363 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,366 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:14:38,389 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:38,391 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11353309600, jitterRate=0.057359352707862854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:38,391 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:14:38,392 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 08:14:38,423 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 08:14:38,423 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 08:14:38,426 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 08:14:38,428 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 08:14:38,469 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 41 msec 2023-07-21 08:14:38,470 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 08:14:38,496 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 08:14:38,503 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 08:14:38,510 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 08:14:38,516 INFO [master/jenkins-hbase5:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 08:14:38,522 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 08:14:38,576 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:38,577 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 08:14:38,578 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 08:14:38,609 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 08:14:38,616 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:14:38,616 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:14:38,616 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:38,616 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:14:38,616 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:14:38,619 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase5.apache.org,46585,1689927275104, sessionid=0x101f28e99290000, setting cluster-up flag (Was=false) 2023-07-21 08:14:38,639 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:38,647 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 08:14:38,648 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:38,654 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:38,661 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 08:14:38,663 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:38,665 WARN [master/jenkins-hbase5:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.hbase-snapshot/.tmp 2023-07-21 08:14:38,736 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(951): ClusterId : 2d38dc86-158b-47bb-a505-fbc95c81dab0 2023-07-21 08:14:38,737 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(951): ClusterId : 2d38dc86-158b-47bb-a505-fbc95c81dab0 2023-07-21 08:14:38,739 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(951): ClusterId : 2d38dc86-158b-47bb-a505-fbc95c81dab0 2023-07-21 08:14:38,746 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:14:38,747 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:14:38,746 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:14:38,750 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 08:14:38,756 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:14:38,756 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:14:38,756 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:14:38,756 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:14:38,756 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:14:38,756 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:14:38,761 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:14:38,761 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:14:38,764 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ReadOnlyZKClient(139): Connect 0x34a1cbb3 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:38,764 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ReadOnlyZKClient(139): Connect 0x22788eee to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:38,765 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:14:38,767 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ReadOnlyZKClient(139): Connect 0x28021cd8 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:38,768 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 08:14:38,781 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:14:38,784 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 08:14:38,784 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-21 08:14:38,787 DEBUG [RS:2;jenkins-hbase5:40169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71a35343, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:38,787 DEBUG [RS:2;jenkins-hbase5:40169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20faa0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:14:38,788 DEBUG [RS:1;jenkins-hbase5:37025] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46543391, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:38,788 DEBUG [RS:1;jenkins-hbase5:37025] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@294f3844, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:14:38,789 DEBUG [RS:0;jenkins-hbase5:40889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47fc1590, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:38,790 DEBUG [RS:0;jenkins-hbase5:40889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a728a34, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:14:38,858 DEBUG 
[RS:2;jenkins-hbase5:40169] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase5:40169 2023-07-21 08:14:38,866 INFO [RS:2;jenkins-hbase5:40169] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:14:38,866 INFO [RS:2;jenkins-hbase5:40169] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:14:38,866 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:14:38,865 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase5:40889 2023-07-21 08:14:38,867 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase5:37025 2023-07-21 08:14:38,879 INFO [RS:1;jenkins-hbase5:37025] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:14:38,879 INFO [RS:1;jenkins-hbase5:37025] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:14:38,879 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:14:38,876 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:40169, startcode=1689927277346 2023-07-21 08:14:38,872 INFO [RS:0;jenkins-hbase5:40889] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:14:38,880 INFO [RS:0;jenkins-hbase5:40889] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:14:38,880 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 08:14:38,882 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:37025, startcode=1689927277157 2023-07-21 08:14:38,882 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:40889, startcode=1689927276956 2023-07-21 08:14:38,907 DEBUG [RS:2;jenkins-hbase5:40169] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:14:38,912 DEBUG [RS:1;jenkins-hbase5:37025] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:14:38,907 DEBUG [RS:0;jenkins-hbase5:40889] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:14:38,988 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 08:14:38,999 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:42583, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:14:38,999 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:43339, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:14:38,999 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:49421, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:14:39,010 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:39,023 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:39,025 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:39,042 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:14:39,051 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 08:14:39,051 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 08:14:39,051 WARN [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 08:14:39,052 WARN [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 08:14:39,051 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 08:14:39,052 WARN [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 08:14:39,053 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 08:14:39,054 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:14:39,054 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 08:14:39,056 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:14:39,056 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:14:39,056 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:14:39,056 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:14:39,056 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase5:0, corePoolSize=10, maxPoolSize=10 2023-07-21 08:14:39,057 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,057 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:14:39,057 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,066 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689927309066 2023-07-21 08:14:39,070 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 08:14:39,073 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:14:39,074 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 08:14:39,074 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 08:14:39,077 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:39,085 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 08:14:39,085 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 08:14:39,086 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 08:14:39,086 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 08:14:39,087 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,089 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 08:14:39,091 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 08:14:39,091 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 08:14:39,100 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 08:14:39,101 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 08:14:39,103 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927279103,5,FailOnTimeoutGroup] 2023-07-21 08:14:39,114 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927279103,5,FailOnTimeoutGroup] 2023-07-21 08:14:39,114 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,114 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 08:14:39,116 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,117 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:39,154 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:40169, startcode=1689927277346 2023-07-21 08:14:39,154 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:40889, startcode=1689927276956 2023-07-21 08:14:39,154 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:37025, startcode=1689927277157 2023-07-21 08:14:39,167 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:14:39,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 08:14:39,171 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:39,172 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:39,172 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b 2023-07-21 08:14:39,176 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,176 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default 
servers. 2023-07-21 08:14:39,177 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 08:14:39,177 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b 2023-07-21 08:14:39,177 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40383 2023-07-21 08:14:39,177 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46407 2023-07-21 08:14:39,178 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,178 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:14:39,179 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 08:14:39,179 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b 2023-07-21 08:14:39,180 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40383 2023-07-21 08:14:39,180 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46407 2023-07-21 08:14:39,181 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b 2023-07-21 08:14:39,181 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40383 2023-07-21 08:14:39,181 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46407 2023-07-21 08:14:39,190 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:14:39,198 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,198 WARN [RS:1;jenkins-hbase5:37025] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 08:14:39,198 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,198 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,198 INFO [RS:1;jenkins-hbase5:37025] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:39,201 WARN [RS:2;jenkins-hbase5:40169] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:14:39,200 WARN [RS:0;jenkins-hbase5:40889] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:14:39,201 INFO [RS:2;jenkins-hbase5:40169] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:39,202 INFO [RS:0;jenkins-hbase5:40889] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:39,203 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,203 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,203 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,40169,1689927277346] 2023-07-21 08:14:39,203 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,37025,1689927277157] 2023-07-21 08:14:39,203 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,40889,1689927276956] 2023-07-21 08:14:39,201 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,233 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:39,234 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,234 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:14:39,236 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,237 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,237 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,237 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,237 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,237 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,238 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,241 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info 2023-07-21 08:14:39,241 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:14:39,242 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,243 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:14:39,247 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:14:39,247 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:14:39,253 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:14:39,253 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:14:39,254 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,254 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:14:39,255 DEBUG [RS:1;jenkins-hbase5:37025] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:14:39,256 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table 2023-07-21 08:14:39,257 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:14:39,258 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,260 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:39,261 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:39,269 INFO 
[RS:1;jenkins-hbase5:37025] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:14:39,271 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 08:14:39,269 INFO [RS:0;jenkins-hbase5:40889] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:14:39,269 INFO [RS:2;jenkins-hbase5:40169] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:14:39,274 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:14:39,284 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:39,285 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11875937120, jitterRate=0.1060328334569931}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:14:39,285 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:14:39,285 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:14:39,285 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:14:39,285 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:14:39,285 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:14:39,285 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:14:39,301 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:14:39,301 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:14:39,302 INFO [RS:0;jenkins-hbase5:40889] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:14:39,306 INFO [RS:1;jenkins-hbase5:37025] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:14:39,306 INFO [RS:2;jenkins-hbase5:40169] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:14:39,310 INFO [RS:0;jenkins-hbase5:40889] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:14:39,310 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:39,311 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:14:39,311 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 08:14:39,314 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:14:39,311 INFO [RS:1;jenkins-hbase5:37025] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:14:39,314 INFO [RS:2;jenkins-hbase5:40169] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:14:39,316 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,317 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,320 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:14:39,324 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:14:39,329 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,329 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,329 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:39,329 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,329 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,330 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, 
corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:0;jenkins-hbase5:40889] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,331 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,332 DEBUG [RS:1;jenkins-hbase5:37025] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,332 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,332 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,332 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:14:39,332 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 08:14:39,336 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,336 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,336 DEBUG [RS:2;jenkins-hbase5:40169] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:39,343 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,343 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,343 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,348 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:39,349 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,349 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,349 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,349 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,349 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,362 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 08:14:39,369 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 08:14:39,379 INFO [RS:0;jenkins-hbase5:40889] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:14:39,383 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40889,1689927276956-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,385 INFO [RS:2;jenkins-hbase5:40169] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:14:39,386 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40169,1689927277346-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:39,395 INFO [RS:1;jenkins-hbase5:37025] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:14:39,395 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,37025,1689927277157-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:39,408 INFO [RS:2;jenkins-hbase5:40169] regionserver.Replication(203): jenkins-hbase5.apache.org,40169,1689927277346 started 2023-07-21 08:14:39,408 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,40169,1689927277346, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:40169, sessionid=0x101f28e99290003 2023-07-21 08:14:39,408 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:14:39,408 DEBUG [RS:2;jenkins-hbase5:40169] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,408 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40169,1689927277346' 2023-07-21 08:14:39,408 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:14:39,409 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:14:39,410 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:14:39,410 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:14:39,410 DEBUG [RS:2;jenkins-hbase5:40169] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,410 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40169,1689927277346' 2023-07-21 08:14:39,410 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:14:39,412 INFO [RS:0;jenkins-hbase5:40889] regionserver.Replication(203): jenkins-hbase5.apache.org,40889,1689927276956 started 2023-07-21 08:14:39,412 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,40889,1689927276956, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:40889, sessionid=0x101f28e99290001 2023-07-21 08:14:39,412 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:14:39,412 DEBUG [RS:0;jenkins-hbase5:40889] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,416 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40889,1689927276956' 2023-07-21 08:14:39,421 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:14:39,421 DEBUG [RS:2;jenkins-hbase5:40169] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:14:39,423 INFO [RS:1;jenkins-hbase5:37025] regionserver.Replication(203): jenkins-hbase5.apache.org,37025,1689927277157 started 2023-07-21 08:14:39,423 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,37025,1689927277157, RpcServer on 
jenkins-hbase5.apache.org/172.31.10.131:37025, sessionid=0x101f28e99290002 2023-07-21 08:14:39,423 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:14:39,423 DEBUG [RS:1;jenkins-hbase5:37025] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,423 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,37025,1689927277157' 2023-07-21 08:14:39,423 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:14:39,424 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:14:39,424 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:14:39,424 DEBUG [RS:2;jenkins-hbase5:40169] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:14:39,424 INFO [RS:2;jenkins-hbase5:40169] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:14:39,424 INFO [RS:2;jenkins-hbase5:40169] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 08:14:39,432 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:14:39,432 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:14:39,432 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:14:39,432 DEBUG [RS:0;jenkins-hbase5:40889] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:39,433 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:14:39,433 DEBUG [RS:1;jenkins-hbase5:37025] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:39,433 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,37025,1689927277157' 2023-07-21 08:14:39,433 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:14:39,433 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40889,1689927276956' 2023-07-21 08:14:39,433 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:14:39,434 DEBUG [RS:1;jenkins-hbase5:37025] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:14:39,434 DEBUG [RS:1;jenkins-hbase5:37025] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:14:39,434 DEBUG [RS:0;jenkins-hbase5:40889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:14:39,434 INFO [RS:1;jenkins-hbase5:37025] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:14:39,435 INFO [RS:1;jenkins-hbase5:37025] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 08:14:39,436 DEBUG [RS:0;jenkins-hbase5:40889] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:14:39,436 INFO [RS:0;jenkins-hbase5:40889] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:14:39,436 INFO [RS:0;jenkins-hbase5:40889] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 08:14:39,521 DEBUG [jenkins-hbase5:46585] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 08:14:39,540 INFO [RS:1;jenkins-hbase5:37025] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C37025%2C1689927277157, suffix=, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,37025,1689927277157, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:39,540 INFO [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40889%2C1689927276956, suffix=, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:39,542 DEBUG [jenkins-hbase5:46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:39,544 DEBUG [jenkins-hbase5:46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:39,544 DEBUG [jenkins-hbase5:46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:39,544 DEBUG [jenkins-hbase5:46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:39,544 DEBUG [jenkins-hbase5:46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:39,548 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,40169,1689927277346, state=OPENING 2023-07-21 08:14:39,558 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 08:14:39,559 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:39,560 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:14:39,574 INFO [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40169%2C1689927277346, suffix=, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40169,1689927277346, 
archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:39,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:39,588 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:39,589 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:39,604 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:39,648 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:39,649 WARN [ReadOnlyZKClient-127.0.0.1:59404@0x21869011] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 08:14:39,649 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:39,663 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:39,664 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:39,664 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:39,665 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:39,666 INFO [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956/jenkins-hbase5.apache.org%2C40889%2C1689927276956.1689927279548 2023-07-21 08:14:39,674 DEBUG [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK], DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK]] 2023-07-21 08:14:39,685 INFO [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40169,1689927277346/jenkins-hbase5.apache.org%2C40169%2C1689927277346.1689927279581 2023-07-21 08:14:39,688 DEBUG [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK]] 2023-07-21 08:14:39,689 INFO [RS:1;jenkins-hbase5:37025] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,37025,1689927277157/jenkins-hbase5.apache.org%2C37025%2C1689927277157.1689927279547 2023-07-21 08:14:39,692 DEBUG [RS:1;jenkins-hbase5:37025] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK]] 2023-07-21 08:14:39,696 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:39,700 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54466, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:39,701 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40169] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.10.131:54466 deadline: 1689927339700, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,786 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:39,792 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:39,799 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54470, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:39,813 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 08:14:39,813 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:39,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40169%2C1689927277346.meta, suffix=.meta, 
logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40169,1689927277346, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:39,844 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:39,848 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:39,855 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:39,863 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40169,1689927277346/jenkins-hbase5.apache.org%2C40169%2C1689927277346.meta.1689927279821.meta 2023-07-21 08:14:39,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK]] 2023-07-21 08:14:39,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:39,865 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:14:39,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 08:14:39,870 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 08:14:39,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 08:14:39,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:39,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 08:14:39,876 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 08:14:39,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:14:39,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info 2023-07-21 08:14:39,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info 2023-07-21 08:14:39,883 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:14:39,884 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,884 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:14:39,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:14:39,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:14:39,887 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:14:39,888 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,888 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:14:39,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table 2023-07-21 08:14:39,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table 2023-07-21 08:14:39,891 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:14:39,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:39,893 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:39,901 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:39,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 08:14:39,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:14:39,920 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11136402240, jitterRate=0.03715828061103821}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:14:39,921 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:14:39,943 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689927279781 2023-07-21 08:14:39,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 08:14:39,967 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 08:14:39,967 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,40169,1689927277346, state=OPEN 2023-07-21 08:14:39,970 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 08:14:39,970 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:14:39,975 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 08:14:39,975 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40169,1689927277346 in 389 msec 2023-07-21 08:14:39,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 08:14:39,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 644 msec 2023-07-21 08:14:39,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1910 sec 2023-07-21 08:14:39,987 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689927279987, completionTime=-1 2023-07-21 08:14:39,987 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 08:14:39,987 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 08:14:40,071 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 08:14:40,071 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689927340071 2023-07-21 08:14:40,071 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689927400071 2023-07-21 08:14:40,071 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 84 msec 2023-07-21 08:14:40,093 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,46585,1689927275104-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:40,093 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,46585,1689927275104-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:40,093 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,46585,1689927275104-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:40,095 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase5:46585, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:40,095 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:40,103 DEBUG [master/jenkins-hbase5:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 08:14:40,113 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 08:14:40,114 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:40,124 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 08:14:40,127 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:40,130 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:40,145 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,148 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d empty. 2023-07-21 08:14:40,149 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,149 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 08:14:40,196 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:40,198 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 569f7e45dedb500f02cd8d4eaf3e648d, NAME => 'hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:40,222 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:40,222 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 569f7e45dedb500f02cd8d4eaf3e648d, disabling compactions & flushes 2023-07-21 08:14:40,222 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 
2023-07-21 08:14:40,222 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:40,222 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. after waiting 0 ms 2023-07-21 08:14:40,222 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:40,223 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:40,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 569f7e45dedb500f02cd8d4eaf3e648d: 2023-07-21 08:14:40,223 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:40,226 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 08:14:40,229 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:40,229 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:40,232 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:40,235 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,236 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 empty. 
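The 'hbase:rsgroup' CREATE above additionally carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A hedged sketch of how the same attributes are expressed with TableDescriptorBuilder (illustration only; the rsgroup startup worker creates this table itself):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RSGroupSchemaSketch {
      static TableDescriptor rsgroupLikeDescriptor() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:rsgroup"))
            // TABLE_ATTRIBUTES from the log: coprocessor plus disabled region splits.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .build();
      }
    }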
2023-07-21 08:14:40,237 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,237 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 08:14:40,251 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927280233"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927280233"}]},"ts":"1689927280233"} 2023-07-21 08:14:40,280 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:40,283 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 60b7870db4b1a6e4be10ee407b45c718, NAME => 'hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:40,299 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:14:40,303 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:40,313 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:40,313 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 60b7870db4b1a6e4be10ee407b45c718, disabling compactions & flushes 2023-07-21 08:14:40,314 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,314 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,314 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. after waiting 0 ms 2023-07-21 08:14:40,314 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,314 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 
2023-07-21 08:14:40,314 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 60b7870db4b1a6e4be10ee407b45c718: 2023-07-21 08:14:40,315 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927280304"}]},"ts":"1689927280304"} 2023-07-21 08:14:40,319 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:40,320 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 08:14:40,321 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927280321"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927280321"}]},"ts":"1689927280321"} 2023-07-21 08:14:40,326 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:40,326 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:40,326 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:40,326 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:40,326 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:40,328 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
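The MetaTableAccessor Put entries above are ordinary writes to the catalog table: each new region gets an info:regioninfo cell plus an info:state cell in hbase:meta. A minimal sketch of reading such a row back, assuming an open Connection named conn and the full meta row key as printed in the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRowSketch {
      // rowKey example from above: "hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d."
      static void printRegionState(Connection conn, String rowKey) throws IOException {
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Result r = meta.get(new Get(Bytes.toBytes(rowKey)));
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          System.out.println("state=" + (state == null ? "<none>" : Bytes.toString(state)));
        }
      }
    }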
2023-07-21 08:14:40,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, ASSIGN}] 2023-07-21 08:14:40,330 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:40,330 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927280330"}]},"ts":"1689927280330"} 2023-07-21 08:14:40,332 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, ASSIGN 2023-07-21 08:14:40,335 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:40,336 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 08:14:40,345 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:40,345 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:40,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:40,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:40,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:40,346 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, ASSIGN}] 2023-07-21 08:14:40,350 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, ASSIGN 2023-07-21 08:14:40,352 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:40,353 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 08:14:40,354 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:40,355 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:40,355 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927280354"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927280354"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927280354"}]},"ts":"1689927280354"} 2023-07-21 08:14:40,355 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927280355"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927280355"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927280355"}]},"ts":"1689927280355"} 2023-07-21 08:14:40,367 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:40,369 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:40,523 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:40,523 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:40,527 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:33992, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:40,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 
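The TransitRegionStateProcedure/OpenRegionProcedure pair above is the master-side assignment path. From test code the usual way to observe it finishing is to wait on the testing utility; a minimal sketch, assuming a started HBaseTestingUtility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class AssignmentWaitSketch {
      static void waitForTable(HBaseTestingUtility util, String table) throws Exception {
        TableName tn = TableName.valueOf(table);
        // Blocks until every region of the table has been assigned and opened,
        // i.e. until OpenRegionProcedure steps like those logged above complete.
        util.waitUntilAllRegionsAssigned(tn);
        for (RegionInfo ri : util.getAdmin().getRegions(tn)) {
          System.out.println("assigned: " + ri.getRegionNameAsString());
        }
      }
    }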
2023-07-21 08:14:40,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 569f7e45dedb500f02cd8d4eaf3e648d, NAME => 'hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:40,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:40,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,546 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,551 DEBUG [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info 2023-07-21 08:14:40,551 DEBUG [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info 2023-07-21 08:14:40,552 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 569f7e45dedb500f02cd8d4eaf3e648d columnFamilyName info 2023-07-21 08:14:40,553 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] regionserver.HStore(310): Store=569f7e45dedb500f02cd8d4eaf3e648d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:40,554 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:40,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:40,566 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 569f7e45dedb500f02cd8d4eaf3e648d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10101146880, jitterRate=-0.0592573881149292}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:40,566 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 569f7e45dedb500f02cd8d4eaf3e648d: 2023-07-21 08:14:40,568 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d., pid=8, masterSystemTime=1689927280523 2023-07-21 08:14:40,573 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:40,574 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:40,574 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 60b7870db4b1a6e4be10ee407b45c718, NAME => 'hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:40,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:14:40,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 
service=MultiRowMutationService 2023-07-21 08:14:40,575 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:40,576 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927280575"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927280575"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927280575"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927280575"}]},"ts":"1689927280575"} 2023-07-21 08:14:40,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 08:14:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,580 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,583 DEBUG [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m 2023-07-21 08:14:40,583 DEBUG [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m 2023-07-21 08:14:40,584 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
60b7870db4b1a6e4be10ee407b45c718 columnFamilyName m 2023-07-21 08:14:40,585 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] regionserver.HStore(310): Store=60b7870db4b1a6e4be10ee407b45c718/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:40,586 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 08:14:40,589 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,37025,1689927277157 in 213 msec 2023-07-21 08:14:40,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-21 08:14:40,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, ASSIGN in 261 msec 2023-07-21 08:14:40,598 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:40,598 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927280598"}]},"ts":"1689927280598"} 2023-07-21 08:14:40,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:40,601 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 08:14:40,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:40,605 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 60b7870db4b1a6e4be10ee407b45c718; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@73b36ee7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:40,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 60b7870db4b1a6e4be10ee407b45c718: 2023-07-21 08:14:40,605 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:40,608 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open 
deploy tasks for hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718., pid=9, masterSystemTime=1689927280523 2023-07-21 08:14:40,611 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 491 msec 2023-07-21 08:14:40,612 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,612 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:40,613 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:40,613 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927280612"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927280612"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927280612"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927280612"}]},"ts":"1689927280612"} 2023-07-21 08:14:40,622 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 08:14:40,622 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,37025,1689927277157 in 247 msec 2023-07-21 08:14:40,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 08:14:40,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, ASSIGN in 276 msec 2023-07-21 08:14:40,628 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 08:14:40,630 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:40,630 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927280630"}]},"ts":"1689927280630"} 2023-07-21 08:14:40,631 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:14:40,631 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:40,633 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 08:14:40,640 INFO [PEWorker-1] 
procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:40,645 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 416 msec 2023-07-21 08:14:40,659 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:40,663 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:34006, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:40,681 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 08:14:40,699 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:14:40,705 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 36 msec 2023-07-21 08:14:40,715 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 08:14:40,728 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:14:40,742 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 08:14:40,742 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 08:14:40,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 26 msec 2023-07-21 08:14:40,753 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 08:14:40,759 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 08:14:40,759 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.175sec 2023-07-21 08:14:40,762 INFO [master/jenkins-hbase5:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 08:14:40,763 INFO [master/jenkins-hbase5:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
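The two CreateNamespaceProcedure runs above set up the built-in 'default' and 'hbase' namespaces during master initialization. The same operation is available to clients through Admin; a minimal sketch for a user namespace:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public class NamespaceSketch {
      static void createAndList(Admin admin, String name) throws IOException {
        // Runs the same CreateNamespaceProcedure on the master as logged above.
        admin.createNamespace(NamespaceDescriptor.create(name).build());
        for (NamespaceDescriptor nd : admin.listNamespaceDescriptors()) {
          System.out.println("namespace: " + nd.getName());
        }
      }
    }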
2023-07-21 08:14:40,763 INFO [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 08:14:40,765 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,46585,1689927275104-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 08:14:40,766 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,46585,1689927275104-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 08:14:40,805 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 08:14:40,820 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:40,821 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:40,823 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:14:40,831 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 08:14:40,851 DEBUG [Listener at localhost/43961] zookeeper.ReadOnlyZKClient(139): Connect 0x702c0ae8 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:40,857 DEBUG [Listener at localhost/43961] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3147adb2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:40,880 DEBUG [hconnection-0x5a2c0b37-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:40,896 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:40,909 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:14:40,911 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:40,924 DEBUG [Listener at localhost/43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 08:14:40,929 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:57944, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 08:14:40,946 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 
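With "Minicluster is up" logged, the test attaches ordinary client connections (the ReadOnlyZKClient and AbstractRpcClient lines above). A minimal sketch of the same bootstrap, assuming the HBaseTestingUtility/StartMiniClusterOption shape logged at startup (1 master, 3 region servers):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
        try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
             Admin admin = conn.getAdmin()) {
          System.out.println("active master: " + admin.getClusterMetrics().getMasterName());
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }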
2023-07-21 08:14:40,946 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:14:40,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(492): Client=jenkins//172.31.10.131 set balanceSwitch=false 2023-07-21 08:14:40,955 DEBUG [Listener at localhost/43961] zookeeper.ReadOnlyZKClient(139): Connect 0x43497e15 to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:40,982 DEBUG [Listener at localhost/43961] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2be10837, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:40,983 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:40,986 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:14:40,987 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101f28e9929000a connected 2023-07-21 08:14:41,044 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=517, ProcessCount=168, AvailableMemoryMB=3629 2023-07-21 08:14:41,047 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 08:14:41,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:41,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:41,139 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 08:14:41,154 INFO [Listener at localhost/43961] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:14:41,154 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:41,155 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:41,155 INFO [Listener at localhost/43961] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:14:41,155 INFO [Listener at localhost/43961] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:14:41,155 INFO [Listener 
at localhost/43961] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:14:41,155 INFO [Listener at localhost/43961] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:14:41,159 INFO [Listener at localhost/43961] ipc.NettyRpcServer(120): Bind to /172.31.10.131:38059 2023-07-21 08:14:41,160 INFO [Listener at localhost/43961] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:14:41,165 DEBUG [Listener at localhost/43961] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:14:41,167 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:41,172 INFO [Listener at localhost/43961] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:14:41,175 INFO [Listener at localhost/43961] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38059 connecting to ZooKeeper ensemble=127.0.0.1:59404 2023-07-21 08:14:41,184 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:380590x0, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:14:41,188 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(162): regionserver:380590x0, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:14:41,188 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38059-0x101f28e9929000b connected 2023-07-21 08:14:41,190 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 08:14:41,191 DEBUG [Listener at localhost/43961] zookeeper.ZKUtil(164): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:14:41,196 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 08:14:41,200 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38059 2023-07-21 08:14:41,203 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38059 2023-07-21 08:14:41,204 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 08:14:41,205 DEBUG [Listener at localhost/43961] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38059 2023-07-21 08:14:41,207 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:14:41,207 INFO [Listener at localhost/43961] http.HttpServer(900): 
Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:14:41,207 INFO [Listener at localhost/43961] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:14:41,208 INFO [Listener at localhost/43961] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:14:41,208 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:14:41,208 INFO [Listener at localhost/43961] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:14:41,209 INFO [Listener at localhost/43961] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:14:41,209 INFO [Listener at localhost/43961] http.HttpServer(1146): Jetty bound to port 46337 2023-07-21 08:14:41,210 INFO [Listener at localhost/43961] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:14:41,214 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:41,214 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@537ec0a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:14:41,214 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:41,214 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41ed43db{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:14:41,357 INFO [Listener at localhost/43961] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:14:41,359 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:14:41,359 INFO [Listener at localhost/43961] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:14:41,360 INFO [Listener at localhost/43961] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:14:41,363 INFO [Listener at localhost/43961] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:14:41,366 INFO [Listener at localhost/43961] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1a482f9e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/java.io.tmpdir/jetty-0_0_0_0-46337-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1330318879016473718/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:14:41,368 INFO [Listener at localhost/43961] server.AbstractConnector(333): Started ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:46337} 2023-07-21 08:14:41,368 INFO [Listener at localhost/43961] server.Server(415): Started @11581ms 2023-07-21 08:14:41,373 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(951): ClusterId : 2d38dc86-158b-47bb-a505-fbc95c81dab0 2023-07-21 08:14:41,374 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:14:41,377 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:14:41,377 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:14:41,380 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:14:41,388 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ReadOnlyZKClient(139): Connect 0x0b2160ba to 127.0.0.1:59404 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:14:41,402 DEBUG [RS:3;jenkins-hbase5:38059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e69fae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:14:41,403 DEBUG [RS:3;jenkins-hbase5:38059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d90bca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:14:41,417 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase5:38059 2023-07-21 08:14:41,417 INFO [RS:3;jenkins-hbase5:38059] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:14:41,417 INFO [RS:3;jenkins-hbase5:38059] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:14:41,417 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 08:14:41,418 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,46585,1689927275104 with isa=jenkins-hbase5.apache.org/172.31.10.131:38059, startcode=1689927281154 2023-07-21 08:14:41,418 DEBUG [RS:3;jenkins-hbase5:38059] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:14:41,426 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:51793, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:14:41,426 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46585] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,426 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:14:41,427 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b 2023-07-21 08:14:41,427 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40383 2023-07-21 08:14:41,427 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46407 2023-07-21 08:14:41,436 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:14:41,436 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:14:41,436 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:14:41,436 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:14:41,437 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:41,437 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,437 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,38059,1689927281154] 2023-07-21 08:14:41,437 WARN [RS:3;jenkins-hbase5:38059] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
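The "Restoring servers: 1" step earlier, followed by RS:3 registering here, is the test adding a fourth region server to the in-process cluster. A minimal sketch of that step, assuming a running HBaseTestingUtility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      static void addRegionServer(HBaseTestingUtility util) throws Exception {
        // Starts one more region server thread; it then runs the reportForDuty /
        // "Registering regionserver" sequence logged above.
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
        System.out.println("started " + rst.getRegionServer().getServerName());
      }
    }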
2023-07-21 08:14:41,438 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:14:41,438 INFO [RS:3;jenkins-hbase5:38059] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:41,438 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:41,438 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,438 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:41,438 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:41,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,452 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,46585,1689927275104] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 08:14:41,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,453 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,453 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:41,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:41,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:41,458 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:41,459 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,460 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,461 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ZKUtil(162): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:41,463 DEBUG [RS:3;jenkins-hbase5:38059] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:14:41,463 INFO [RS:3;jenkins-hbase5:38059] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:14:41,469 INFO [RS:3;jenkins-hbase5:38059] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:14:41,470 INFO [RS:3;jenkins-hbase5:38059] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:14:41,470 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:41,470 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:14:41,472 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:41,473 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,473 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,473 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,473 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,473 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,474 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:14:41,474 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,474 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,474 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,474 DEBUG [RS:3;jenkins-hbase5:38059] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:14:41,475 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:41,475 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:41,475 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:14:41,493 INFO [RS:3;jenkins-hbase5:38059] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:14:41,493 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,38059,1689927281154-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:14:41,505 INFO [RS:3;jenkins-hbase5:38059] regionserver.Replication(203): jenkins-hbase5.apache.org,38059,1689927281154 started 2023-07-21 08:14:41,505 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,38059,1689927281154, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:38059, sessionid=0x101f28e9929000b 2023-07-21 08:14:41,506 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:14:41,506 DEBUG [RS:3;jenkins-hbase5:38059] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,506 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,38059,1689927281154' 2023-07-21 08:14:41,506 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:14:41,506 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:14:41,507 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:14:41,507 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:14:41,507 DEBUG [RS:3;jenkins-hbase5:38059] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:41,507 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,38059,1689927281154' 2023-07-21 08:14:41,507 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:14:41,508 DEBUG [RS:3;jenkins-hbase5:38059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:14:41,508 DEBUG [RS:3;jenkins-hbase5:38059] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:14:41,509 INFO [RS:3;jenkins-hbase5:38059] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:14:41,509 INFO [RS:3;jenkins-hbase5:38059] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 08:14:41,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:41,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:41,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:41,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:41,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:41,527 DEBUG [hconnection-0x744a8a1-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:41,545 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54490, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:41,552 DEBUG [hconnection-0x744a8a1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:41,558 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:34008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:41,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:41,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:41,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:41,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:41,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.10.131:57944 deadline: 1689928481573, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:41,575 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:41,578 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:41,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:41,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:41,580 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:41,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:41,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:41,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:41,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:41,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:41,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:41,595 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:41,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:41,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:41,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:41,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,621 INFO [RS:3;jenkins-hbase5:38059] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C38059%2C1689927281154, suffix=, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,38059,1689927281154, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:41,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:41,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:41,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:41,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(238): Moving server region 60b7870db4b1a6e4be10ee407b45c718, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, REOPEN/MOVE 2023-07-21 08:14:41,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(238): Moving server region 569f7e45dedb500f02cd8d4eaf3e648d, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:41,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, REOPEN/MOVE 2023-07-21 
08:14:41,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-21 08:14:41,660 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, REOPEN/MOVE 2023-07-21 08:14:41,663 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, REOPEN/MOVE 2023-07-21 08:14:41,665 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,665 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:41,665 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927281665"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927281665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927281665"}]},"ts":"1689927281665"} 2023-07-21 08:14:41,665 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927281665"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927281665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927281665"}]},"ts":"1689927281665"} 2023-07-21 08:14:41,679 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:41,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:41,684 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:41,685 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:41,685 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:41,693 INFO [RS:3;jenkins-hbase5:38059] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,38059,1689927281154/jenkins-hbase5.apache.org%2C38059%2C1689927281154.1689927281622 2023-07-21 08:14:41,693 DEBUG [RS:3;jenkins-hbase5:38059] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK]] 2023-07-21 08:14:41,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:41,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 569f7e45dedb500f02cd8d4eaf3e648d, disabling compactions & flushes 2023-07-21 08:14:41,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:41,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:41,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. after waiting 0 ms 2023-07-21 08:14:41,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:41,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 569f7e45dedb500f02cd8d4eaf3e648d 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 08:14:41,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/.tmp/info/b0d120539aea4c37a839bf27f2eec0c0 2023-07-21 08:14:41,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/.tmp/info/b0d120539aea4c37a839bf27f2eec0c0 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info/b0d120539aea4c37a839bf27f2eec0c0 2023-07-21 08:14:41,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info/b0d120539aea4c37a839bf27f2eec0c0, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 08:14:42,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 569f7e45dedb500f02cd8d4eaf3e648d in 150ms, sequenceid=6, compaction requested=false 2023-07-21 08:14:42,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 
'hbase:namespace' 2023-07-21 08:14:42,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 08:14:42,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:42,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 569f7e45dedb500f02cd8d4eaf3e648d: 2023-07-21 08:14:42,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 569f7e45dedb500f02cd8d4eaf3e648d move to jenkins-hbase5.apache.org,40889,1689927276956 record at close sequenceid=6 2023-07-21 08:14:42,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 60b7870db4b1a6e4be10ee407b45c718, disabling compactions & flushes 2023-07-21 08:14:42,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:42,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:42,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. after waiting 0 ms 2023-07-21 08:14:42,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 
2023-07-21 08:14:42,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 60b7870db4b1a6e4be10ee407b45c718 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-21 08:14:42,019 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=CLOSED 2023-07-21 08:14:42,019 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927282019"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927282019"}]},"ts":"1689927282019"} 2023-07-21 08:14:42,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-21 08:14:42,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,37025,1689927277157 in 337 msec 2023-07-21 08:14:42,026 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:42,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/.tmp/m/40fe3ac2f72c4c82813f77dd5c14ceba 2023-07-21 08:14:42,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/.tmp/m/40fe3ac2f72c4c82813f77dd5c14ceba as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m/40fe3ac2f72c4c82813f77dd5c14ceba 2023-07-21 08:14:42,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m/40fe3ac2f72c4c82813f77dd5c14ceba, entries=3, sequenceid=9, filesize=5.2 K 2023-07-21 08:14:42,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for 60b7870db4b1a6e4be10ee407b45c718 in 60ms, sequenceid=9, compaction requested=false 2023-07-21 08:14:42,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 08:14:42,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 08:14:42,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:14:42,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:42,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 60b7870db4b1a6e4be10ee407b45c718: 2023-07-21 08:14:42,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 60b7870db4b1a6e4be10ee407b45c718 move to jenkins-hbase5.apache.org,40889,1689927276956 record at close sequenceid=9 2023-07-21 08:14:42,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,091 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=CLOSED 2023-07-21 08:14:42,091 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927282091"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927282091"}]},"ts":"1689927282091"} 2023-07-21 08:14:42,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 08:14:42,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,37025,1689927277157 in 411 msec 2023-07-21 08:14:42,097 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:42,098 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 08:14:42,098 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:42,098 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927282098"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927282098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927282098"}]},"ts":"1689927282098"} 2023-07-21 08:14:42,099 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:42,099 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927282099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927282099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927282099"}]},"ts":"1689927282099"} 2023-07-21 08:14:42,101 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:42,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:42,254 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:42,254 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:42,260 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:56374, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:42,268 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 
2023-07-21 08:14:42,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 569f7e45dedb500f02cd8d4eaf3e648d, NAME => 'hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:42,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:42,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,276 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,277 DEBUG [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info 2023-07-21 08:14:42,278 DEBUG [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info 2023-07-21 08:14:42,278 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 569f7e45dedb500f02cd8d4eaf3e648d columnFamilyName info 2023-07-21 08:14:42,302 DEBUG [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] regionserver.HStore(539): loaded hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/info/b0d120539aea4c37a839bf27f2eec0c0 2023-07-21 08:14:42,303 INFO [StoreOpener-569f7e45dedb500f02cd8d4eaf3e648d-1] regionserver.HStore(310): Store=569f7e45dedb500f02cd8d4eaf3e648d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:42,304 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:14:42,314 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 569f7e45dedb500f02cd8d4eaf3e648d; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11330925280, jitterRate=0.05527465045452118}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:42,314 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 569f7e45dedb500f02cd8d4eaf3e648d: 2023-07-21 08:14:42,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d., pid=16, masterSystemTime=1689927282253 2023-07-21 08:14:42,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:42,320 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:14:42,320 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:42,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 60b7870db4b1a6e4be10ee407b45c718, NAME => 'hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. service=MultiRowMutationService 2023-07-21 08:14:42,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 08:14:42,321 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=569f7e45dedb500f02cd8d4eaf3e648d, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,321 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927282321"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927282321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927282321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927282321"}]},"ts":"1689927282321"} 2023-07-21 08:14:42,328 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 08:14:42,328 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 569f7e45dedb500f02cd8d4eaf3e648d, server=jenkins-hbase5.apache.org,40889,1689927276956 in 224 msec 2023-07-21 08:14:42,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=569f7e45dedb500f02cd8d4eaf3e648d, REOPEN/MOVE in 676 msec 2023-07-21 08:14:42,333 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,334 DEBUG [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m 2023-07-21 08:14:42,335 DEBUG [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m 2023-07-21 08:14:42,335 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 60b7870db4b1a6e4be10ee407b45c718 columnFamilyName m 2023-07-21 08:14:42,349 DEBUG [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] regionserver.HStore(539): loaded hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m/40fe3ac2f72c4c82813f77dd5c14ceba 2023-07-21 08:14:42,349 INFO [StoreOpener-60b7870db4b1a6e4be10ee407b45c718-1] regionserver.HStore(310): Store=60b7870db4b1a6e4be10ee407b45c718/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:42,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:14:42,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 60b7870db4b1a6e4be10ee407b45c718; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6826df10, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:42,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 60b7870db4b1a6e4be10ee407b45c718: 2023-07-21 08:14:42,361 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718., pid=17, masterSystemTime=1689927282253 2023-07-21 08:14:42,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:14:42,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 
2023-07-21 08:14:42,365 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=60b7870db4b1a6e4be10ee407b45c718, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:42,365 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927282365"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927282365"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927282365"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927282365"}]},"ts":"1689927282365"} 2023-07-21 08:14:42,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-21 08:14:42,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure 60b7870db4b1a6e4be10ee407b45c718, server=jenkins-hbase5.apache.org,40889,1689927276956 in 266 msec 2023-07-21 08:14:42,373 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=60b7870db4b1a6e4be10ee407b45c718, REOPEN/MOVE in 723 msec 2023-07-21 08:14:42,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 08:14:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to default 2023-07-21 08:14:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:42,657 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37025] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.10.131:34008 deadline: 1689927342657, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase5.apache.org port=40889 startCode=1689927276956. As of locationSeqNum=9. 
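The MoveServers / ListRSGroupInfos requests logged above originate from rsgroup admin calls on the client side. As a rough sketch only (RSGroupAdminClient is an internal class of the hbase-rsgroup module and its exact signatures may differ by version; the group and host names below are copied from the log purely for illustration):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_854977135";
          rsGroupAdmin.addRSGroup(group);   // create the target group
          // -> RSGroupAdminService.MoveServers, as in "Move servers done: default => ..."
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase5.apache.org", 37025)), group);
          rsGroupAdmin.listRSGroups();      // -> RSGroupAdminService.ListRSGroupInfos
        }
      }
    }
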
2023-07-21 08:14:42,762 DEBUG [hconnection-0x744a8a1-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:42,765 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:56376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:42,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:42,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:42,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:42,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:42,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:42,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:42,799 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:42,802 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37025] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 619 connection: 172.31.10.131:34006 deadline: 1689927342802, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase5.apache.org port=40889 startCode=1689927276956. As of locationSeqNum=9. 
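The "create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', ...}" request above is the server-side view of an Admin createTable call. A minimal sketch of the equivalent client code, assuming the standard HBase 2.x Admin API; the split keys mirror the five region boundaries that appear later in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder tdb = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setRegionReplication(1)                                   // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)                                     // VERSIONS => '1'
                  .build());
          // Four split keys -> five regions, matching the boundaries in the log.
          byte[][] splits = new byte[][] {
              Bytes.toBytes("aaaaa"),
              Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
              Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
              Bytes.toBytes("zzzzz")
          };
          admin.createTable(tdb.build(), splits);  // pid=18 in the log is the resulting procedure
        }
      }
    }
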
2023-07-21 08:14:42,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-21 08:14:42,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-21 08:14:42,906 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:42,908 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:56392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:42,911 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:42,912 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:42,912 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:42,913 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:42,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-21 08:14:42,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:42,924 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:42,924 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:42,924 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:42,924 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:42,924 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:42,925 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 empty. 2023-07-21 08:14:42,925 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 empty. 
2023-07-21 08:14:42,925 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 empty. 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 empty. 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 empty. 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:42,926 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:42,927 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:42,927 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 08:14:42,952 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:42,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 353d89f36c601f7cb9c6cb8f8f6e2758, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:42,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => ca2193fcb1fbb15e2bf51790130f2ca4, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:42,955 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 43600625f3243faa3f0875d7248f6143, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:43,005 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,005 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing ca2193fcb1fbb15e2bf51790130f2ca4, disabling compactions & flushes 2023-07-21 08:14:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 43600625f3243faa3f0875d7248f6143, disabling compactions & flushes 2023-07-21 08:14:43,007 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 353d89f36c601f7cb9c6cb8f8f6e2758, disabling compactions & flushes 2023-07-21 08:14:43,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 
2023-07-21 08:14:43,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. after waiting 0 ms 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. after waiting 0 ms 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for ca2193fcb1fbb15e2bf51790130f2ca4: 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. after waiting 0 ms 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 353d89f36c601f7cb9c6cb8f8f6e2758: 2023-07-21 08:14:43,008 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,008 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 
2023-07-21 08:14:43,009 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 43600625f3243faa3f0875d7248f6143: 2023-07-21 08:14:43,009 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b643a71c68ab33e85282d17e3d2c2215, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:43,009 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => dd6e953dcd9091b099a55fca303acea3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:43,035 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,036 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b643a71c68ab33e85282d17e3d2c2215, disabling compactions & flushes 2023-07-21 08:14:43,036 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:43,036 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:43,036 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. after waiting 0 ms 2023-07-21 08:14:43,036 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:43,036 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 
2023-07-21 08:14:43,037 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b643a71c68ab33e85282d17e3d2c2215: 2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing dd6e953dcd9091b099a55fca303acea3, disabling compactions & flushes 2023-07-21 08:14:43,040 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. after waiting 0 ms 2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:43,040 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
2023-07-21 08:14:43,040 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for dd6e953dcd9091b099a55fca303acea3: 2023-07-21 08:14:43,044 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:43,045 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927283045"}]},"ts":"1689927283045"} 2023-07-21 08:14:43,046 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927283045"}]},"ts":"1689927283045"} 2023-07-21 08:14:43,046 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927283045"}]},"ts":"1689927283045"} 2023-07-21 08:14:43,046 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927283045"}]},"ts":"1689927283045"} 2023-07-21 08:14:43,046 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283045"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927283045"}]},"ts":"1689927283045"} 2023-07-21 08:14:43,095 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
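The CREATE_TABLE_ADD_TO_META step above writes one hbase:meta row per region (info:regioninfo plus info:state). A small sketch, using only the plain client Scan API, of how those five rows can be read back; the row prefix is the table name from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaScanSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
              .addFamily(Bytes.toBytes("info"));
          for (Result r : meta.getScanner(scan)) {
            System.out.println(Bytes.toStringBinary(r.getRow()));  // one row per region
          }
        }
      }
    }
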
2023-07-21 08:14:43,096 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:43,096 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927283096"}]},"ts":"1689927283096"} 2023-07-21 08:14:43,100 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 08:14:43,109 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:43,109 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:43,109 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:43,109 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:43,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, ASSIGN}] 2023-07-21 08:14:43,113 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, ASSIGN 2023-07-21 08:14:43,113 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, ASSIGN 2023-07-21 08:14:43,114 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, ASSIGN 2023-07-21 08:14:43,114 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, ASSIGN 2023-07-21 08:14:43,115 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, ASSIGN 2023-07-21 08:14:43,115 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:43,115 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:14:43,115 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:14:43,115 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:43,117 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:43,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-21 08:14:43,266 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
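The balancer output and the TransitRegionStateProcedures above decide where each of the five new regions will open. A sketch of how a client can observe the resulting placements once the opens complete, via the RegionLocator API (standard in HBase 2.x):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // e.g. "43600625f3243faa3f0875d7248f6143 -> jenkins-hbase5.apache.org,40889,..."
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
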
2023-07-21 08:14:43,270 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,270 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:43,270 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,270 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:43,270 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,270 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927283270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927283270"}]},"ts":"1689927283270"} 2023-07-21 08:14:43,270 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927283270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927283270"}]},"ts":"1689927283270"} 2023-07-21 08:14:43,270 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927283270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927283270"}]},"ts":"1689927283270"} 2023-07-21 08:14:43,271 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927283270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927283270"}]},"ts":"1689927283270"} 2023-07-21 08:14:43,271 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927283270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927283270"}]},"ts":"1689927283270"} 2023-07-21 08:14:43,273 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=21, state=RUNNABLE; OpenRegionProcedure 
43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:43,276 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=22, state=RUNNABLE; OpenRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:43,277 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; OpenRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:43,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=19, state=RUNNABLE; OpenRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:43,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=20, state=RUNNABLE; OpenRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:43,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-21 08:14:43,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43600625f3243faa3f0875d7248f6143, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 08:14:43,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,436 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,438 DEBUG [StoreOpener-43600625f3243faa3f0875d7248f6143-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/f 2023-07-21 08:14:43,438 DEBUG [StoreOpener-43600625f3243faa3f0875d7248f6143-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/f 2023-07-21 08:14:43,439 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43600625f3243faa3f0875d7248f6143 columnFamilyName f 2023-07-21 08:14:43,439 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] regionserver.HStore(310): Store=43600625f3243faa3f0875d7248f6143/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:43,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 
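The CompactionConfiguration lines above print the effective compaction settings per store (128 MB min compact size, 3..10 files per compaction, ratio 1.2, off-peak ratio 5.0, weekly major compactions with 0.5 jitter). These map to well-known configuration keys; the sketch below sets them explicitly to the default values the mini cluster is logging, for illustration only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
      }
    }
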
2023-07-21 08:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b643a71c68ab33e85282d17e3d2c2215, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 08:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,446 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,448 DEBUG [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/f 2023-07-21 08:14:43,448 DEBUG [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/f 2023-07-21 08:14:43,448 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b643a71c68ab33e85282d17e3d2c2215 columnFamilyName f 2023-07-21 08:14:43,449 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] regionserver.HStore(310): Store=b643a71c68ab33e85282d17e3d2c2215/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:43,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:43,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:43,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:43,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:43,479 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened b643a71c68ab33e85282d17e3d2c2215; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10901430560, jitterRate=0.015274837613105774}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:43,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for b643a71c68ab33e85282d17e3d2c2215: 2023-07-21 08:14:43,479 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 43600625f3243faa3f0875d7248f6143; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10033196480, jitterRate=-0.06558576226234436}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:43,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 43600625f3243faa3f0875d7248f6143: 2023-07-21 08:14:43,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143., pid=24, masterSystemTime=1689927283428 2023-07-21 08:14:43,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215., pid=25, masterSystemTime=1689927283438 2023-07-21 08:14:43,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 
2023-07-21 08:14:43,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:43,483 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 353d89f36c601f7cb9c6cb8f8f6e2758, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 08:14:43,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,484 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:43,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:43,484 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283484"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927283484"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927283484"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927283484"}]},"ts":"1689927283484"} 2023-07-21 08:14:43,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
2023-07-21 08:14:43,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd6e953dcd9091b099a55fca303acea3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 08:14:43,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,485 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,486 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283485"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927283485"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927283485"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927283485"}]},"ts":"1689927283485"} 2023-07-21 08:14:43,487 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,488 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,490 DEBUG [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/f 2023-07-21 08:14:43,491 DEBUG [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/f 2023-07-21 08:14:43,492 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 353d89f36c601f7cb9c6cb8f8f6e2758 columnFamilyName f 2023-07-21 08:14:43,492 DEBUG [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/f 2023-07-21 08:14:43,492 DEBUG [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/f 2023-07-21 08:14:43,492 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] regionserver.HStore(310): Store=353d89f36c601f7cb9c6cb8f8f6e2758/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:43,494 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd6e953dcd9091b099a55fca303acea3 columnFamilyName f 2023-07-21 08:14:43,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=22 2023-07-21 08:14:43,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=22, state=SUCCESS; OpenRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,40169,1689927277346 in 213 msec 2023-07-21 08:14:43,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,501 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] regionserver.HStore(310): Store=dd6e953dcd9091b099a55fca303acea3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:43,501 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=21 2023-07-21 08:14:43,502 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; OpenRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,40889,1689927276956 in 218 msec 2023-07-21 08:14:43,503 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, ASSIGN in 391 msec 2023-07-21 08:14:43,504 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, ASSIGN in 393 msec 2023-07-21 08:14:43,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:43,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:43,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:43,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 353d89f36c601f7cb9c6cb8f8f6e2758; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9714676960, jitterRate=-0.095250204205513}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:43,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 353d89f36c601f7cb9c6cb8f8f6e2758: 2023-07-21 08:14:43,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758., pid=28, masterSystemTime=1689927283438 2023-07-21 08:14:43,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:43,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened dd6e953dcd9091b099a55fca303acea3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10078144000, jitterRate=-0.06139969825744629}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:43,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for dd6e953dcd9091b099a55fca303acea3: 2023-07-21 08:14:43,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:43,519 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:43,520 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927283519"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927283519"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927283519"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927283519"}]},"ts":"1689927283519"} 2023-07-21 08:14:43,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3., pid=26, masterSystemTime=1689927283428 2023-07-21 08:14:43,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:43,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:43,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
2023-07-21 08:14:43,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca2193fcb1fbb15e2bf51790130f2ca4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 08:14:43,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:43,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,523 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,526 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283523"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927283523"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927283523"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927283523"}]},"ts":"1689927283523"} 2023-07-21 08:14:43,529 INFO [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,530 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=20 2023-07-21 08:14:43,530 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=20, state=SUCCESS; OpenRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,40169,1689927277346 in 240 msec 2023-07-21 08:14:43,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, ASSIGN in 421 msec 2023-07-21 08:14:43,533 DEBUG [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/f 2023-07-21 08:14:43,533 DEBUG [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/f 
2023-07-21 08:14:43,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-21 08:14:43,534 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; OpenRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,40889,1689927276956 in 253 msec 2023-07-21 08:14:43,535 INFO [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca2193fcb1fbb15e2bf51790130f2ca4 columnFamilyName f 2023-07-21 08:14:43,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, ASSIGN in 425 msec 2023-07-21 08:14:43,536 INFO [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] regionserver.HStore(310): Store=ca2193fcb1fbb15e2bf51790130f2ca4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:43,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:43,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:43,545 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened ca2193fcb1fbb15e2bf51790130f2ca4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10703696480, jitterRate=-0.003140583634376526}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:43,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for ca2193fcb1fbb15e2bf51790130f2ca4: 2023-07-21 08:14:43,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4., pid=27, masterSystemTime=1689927283428 2023-07-21 08:14:43,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:43,549 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:43,549 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927283549"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927283549"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927283549"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927283549"}]},"ts":"1689927283549"} 2023-07-21 08:14:43,554 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=19 2023-07-21 08:14:43,554 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=19, state=SUCCESS; OpenRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,40889,1689927276956 in 273 msec 2023-07-21 08:14:43,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-21 08:14:43,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, ASSIGN in 445 msec 2023-07-21 08:14:43,558 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:43,558 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927283558"}]},"ts":"1689927283558"} 2023-07-21 08:14:43,560 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 08:14:43,563 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:43,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 769 msec 2023-07-21 08:14:43,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-21 08:14:43,922 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-21 
08:14:43,922 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 08:14:43,923 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:43,930 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-21 08:14:43,930 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:43,931 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-21 08:14:43,931 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:43,937 DEBUG [Listener at localhost/43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:43,949 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:34020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:43,962 DEBUG [Listener at localhost/43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:43,992 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:57456, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:43,994 DEBUG [Listener at localhost/43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:44,152 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54504, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:44,156 DEBUG [Listener at localhost/43961] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:44,176 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:56402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:44,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:44,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:44,193 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:44,219 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:44,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:44,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region ca2193fcb1fbb15e2bf51790130f2ca4 to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:44,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:44,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:44,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:44,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:44,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, REOPEN/MOVE 2023-07-21 08:14:44,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 353d89f36c601f7cb9c6cb8f8f6e2758 to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,229 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, REOPEN/MOVE 2023-07-21 08:14:44,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:44,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:44,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:44,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:44,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:44,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, REOPEN/MOVE 2023-07-21 08:14:44,242 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:44,243 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, REOPEN/MOVE 2023-07-21 08:14:44,243 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284242"}]},"ts":"1689927284242"} 2023-07-21 08:14:44,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 43600625f3243faa3f0875d7248f6143 to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:44,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:44,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:44,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:44,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:44,246 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:44,246 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284245"}]},"ts":"1689927284245"} 2023-07-21 08:14:44,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, REOPEN/MOVE 2023-07-21 08:14:44,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region b643a71c68ab33e85282d17e3d2c2215 to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): 
Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:44,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:44,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:44,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:44,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:44,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=29, state=RUNNABLE; CloseRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:44,250 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, REOPEN/MOVE 2023-07-21 08:14:44,251 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:44,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, REOPEN/MOVE 2023-07-21 08:14:44,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region dd6e953dcd9091b099a55fca303acea3 to RSGroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:44,252 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, REOPEN/MOVE 2023-07-21 08:14:44,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:44,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:44,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:44,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:44,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:44,254 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:44,254 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284254"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284254"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284254"}]},"ts":"1689927284254"} 2023-07-21 08:14:44,258 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:44,258 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284258"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284258"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284258"}]},"ts":"1689927284258"} 2023-07-21 08:14:44,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, REOPEN/MOVE 2023-07-21 08:14:44,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_854977135, current retry=0 2023-07-21 08:14:44,261 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, REOPEN/MOVE 2023-07-21 08:14:44,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:44,264 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:44,264 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284264"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284264"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284264"}]},"ts":"1689927284264"} 2023-07-21 08:14:44,264 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:44,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:44,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 
ca2193fcb1fbb15e2bf51790130f2ca4, disabling compactions & flushes 2023-07-21 08:14:44,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:44,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:44,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 353d89f36c601f7cb9c6cb8f8f6e2758, disabling compactions & flushes 2023-07-21 08:14:44,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. after waiting 0 ms 2023-07-21 08:14:44,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:44,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. after waiting 0 ms 2023-07-21 08:14:44,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:44,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:44,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
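The entries above show the RSGroupAdminEndpoint handling a client request to move Group_testTableMoveTruncateAndDrop into the group Group_testTableMoveTruncateAndDrop_854977135: the master rewrites the /hbase/rsgroup znodes and then schedules one REOPEN/MOVE TransitRegionStateProcedure per region, whose CloseRegionProcedure children appear next in the log. A minimal client-side sketch of the call that produces this flow, assuming an HBase 2.4 deployment with the hbase-rsgroup module's RSGroupAdminClient available (the class name, main method, connection setup, and lack of error handling below are illustrative; only the table and group names come from the log):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        // Connects with the default client configuration; the test itself would use
        // the mini-cluster connection supplied by HBaseTestingUtility instead.
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Moving the table schedules a REOPEN/MOVE procedure per region, which is
          // what the pid=29..43 procedure entries in this log record.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
              "Group_testTableMoveTruncateAndDrop_854977135");
        }
      }
    }

From the HBase shell the equivalent is something like move_tables_rsgroup 'Group_testTableMoveTruncateAndDrop_854977135', ['Group_testTableMoveTruncateAndDrop'], assuming the rsgroup shell commands are installed.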
2023-07-21 08:14:44,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 353d89f36c601f7cb9c6cb8f8f6e2758: 2023-07-21 08:14:44,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for ca2193fcb1fbb15e2bf51790130f2ca4: 2023-07-21 08:14:44,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 353d89f36c601f7cb9c6cb8f8f6e2758 move to jenkins-hbase5.apache.org,37025,1689927277157 record at close sequenceid=2 2023-07-21 08:14:44,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding ca2193fcb1fbb15e2bf51790130f2ca4 move to jenkins-hbase5.apache.org,38059,1689927281154 record at close sequenceid=2 2023-07-21 08:14:44,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 43600625f3243faa3f0875d7248f6143, disabling compactions & flushes 2023-07-21 08:14:44,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:44,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:44,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. after waiting 0 ms 2023-07-21 08:14:44,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 
2023-07-21 08:14:44,525 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=CLOSED 2023-07-21 08:14:44,525 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284525"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927284525"}]},"ts":"1689927284525"} 2023-07-21 08:14:44,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=29 2023-07-21 08:14:44,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=29, state=SUCCESS; CloseRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,40889,1689927276956 in 280 msec 2023-07-21 08:14:44,541 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:44,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing b643a71c68ab33e85282d17e3d2c2215, disabling compactions & flushes 2023-07-21 08:14:44,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. after waiting 0 ms 2023-07-21 08:14:44,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 
2023-07-21 08:14:44,545 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=CLOSED 2023-07-21 08:14:44,545 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284545"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927284545"}]},"ts":"1689927284545"} 2023-07-21 08:14:44,551 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-21 08:14:44,551 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,40169,1689927277346 in 296 msec 2023-07-21 08:14:44,553 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:44,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:44,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:44,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 43600625f3243faa3f0875d7248f6143: 2023-07-21 08:14:44,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 43600625f3243faa3f0875d7248f6143 move to jenkins-hbase5.apache.org,37025,1689927277157 record at close sequenceid=2 2023-07-21 08:14:44,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing dd6e953dcd9091b099a55fca303acea3, disabling compactions & flushes 2023-07-21 08:14:44,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:44,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:44,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
after waiting 0 ms 2023-07-21 08:14:44,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:44,592 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=CLOSED 2023-07-21 08:14:44,592 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284566"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927284566"}]},"ts":"1689927284566"} 2023-07-21 08:14:44,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-21 08:14:44,602 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,40889,1689927276956 in 333 msec 2023-07-21 08:14:44,603 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:44,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:44,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:44,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for b643a71c68ab33e85282d17e3d2c2215: 2023-07-21 08:14:44,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding b643a71c68ab33e85282d17e3d2c2215 move to jenkins-hbase5.apache.org,37025,1689927277157 record at close sequenceid=2 2023-07-21 08:14:44,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
2023-07-21 08:14:44,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for dd6e953dcd9091b099a55fca303acea3: 2023-07-21 08:14:44,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding dd6e953dcd9091b099a55fca303acea3 move to jenkins-hbase5.apache.org,38059,1689927281154 record at close sequenceid=2 2023-07-21 08:14:44,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,650 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=CLOSED 2023-07-21 08:14:44,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,651 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284650"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927284650"}]},"ts":"1689927284650"} 2023-07-21 08:14:44,652 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=CLOSED 2023-07-21 08:14:44,653 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927284652"}]},"ts":"1689927284652"} 2023-07-21 08:14:44,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-21 08:14:44,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,40169,1689927277346 in 390 msec 2023-07-21 08:14:44,659 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:44,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-21 08:14:44,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,40889,1689927276956 in 384 msec 2023-07-21 08:14:44,660 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:44,691 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 08:14:44,691 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:44,692 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,692 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,692 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284691"}]},"ts":"1689927284691"} 2023-07-21 08:14:44,692 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284692"}]},"ts":"1689927284692"} 2023-07-21 08:14:44,691 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,691 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:44,692 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284691"}]},"ts":"1689927284691"} 2023-07-21 08:14:44,692 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284691"}]},"ts":"1689927284691"} 2023-07-21 08:14:44,692 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927284691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927284691"}]},"ts":"1689927284691"} 2023-07-21 08:14:44,695 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=31, state=RUNNABLE; OpenRegionProcedure 
43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:44,698 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=30, state=RUNNABLE; OpenRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:44,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=32, state=RUNNABLE; OpenRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:44,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=29, state=RUNNABLE; OpenRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:44,705 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=35, state=RUNNABLE; OpenRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:44,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b643a71c68ab33e85282d17e3d2c2215, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 08:14:44,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:44,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,857 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,858 DEBUG [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/f 2023-07-21 08:14:44,858 DEBUG [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/f 2023-07-21 08:14:44,858 DEBUG 
[RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:44,858 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:14:44,859 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b643a71c68ab33e85282d17e3d2c2215 columnFamilyName f 2023-07-21 08:14:44,859 INFO [StoreOpener-b643a71c68ab33e85282d17e3d2c2215-1] regionserver.HStore(310): Store=b643a71c68ab33e85282d17e3d2c2215/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:44,860 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:57462, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:14:44,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
2023-07-21 08:14:44,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca2193fcb1fbb15e2bf51790130f2ca4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 08:14:44,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:44,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,867 INFO [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,868 DEBUG [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/f 2023-07-21 08:14:44,868 DEBUG [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/f 2023-07-21 08:14:44,869 INFO [StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca2193fcb1fbb15e2bf51790130f2ca4 columnFamilyName f 2023-07-21 08:14:44,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:44,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened b643a71c68ab33e85282d17e3d2c2215; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10803896480, jitterRate=0.006191268563270569}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:44,873 INFO 
[StoreOpener-ca2193fcb1fbb15e2bf51790130f2ca4-1] regionserver.HStore(310): Store=ca2193fcb1fbb15e2bf51790130f2ca4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:44,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for b643a71c68ab33e85282d17e3d2c2215: 2023-07-21 08:14:44,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215., pid=41, masterSystemTime=1689927284849 2023-07-21 08:14:44,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:44,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:44,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 
2023-07-21 08:14:44,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened ca2193fcb1fbb15e2bf51790130f2ca4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11766662400, jitterRate=0.09585583209991455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:44,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 353d89f36c601f7cb9c6cb8f8f6e2758, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 08:14:44,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for ca2193fcb1fbb15e2bf51790130f2ca4: 2023-07-21 08:14:44,882 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,883 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284882"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927284882"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927284882"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927284882"}]},"ts":"1689927284882"} 2023-07-21 08:14:44,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:44,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4., pid=42, masterSystemTime=1689927284858 2023-07-21 08:14:44,891 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
2023-07-21 08:14:44,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:44,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:44,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd6e953dcd9091b099a55fca303acea3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 08:14:44,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:44,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,905 DEBUG [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/f 2023-07-21 08:14:44,905 DEBUG [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/f 2023-07-21 08:14:44,905 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:44,906 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 353d89f36c601f7cb9c6cb8f8f6e2758 columnFamilyName f 2023-07-21 08:14:44,906 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284905"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927284905"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927284905"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927284905"}]},"ts":"1689927284905"} 2023-07-21 08:14:44,909 INFO [StoreOpener-353d89f36c601f7cb9c6cb8f8f6e2758-1] regionserver.HStore(310): Store=353d89f36c601f7cb9c6cb8f8f6e2758/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:44,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=32 2023-07-21 08:14:44,912 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=32, state=SUCCESS; OpenRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,37025,1689927277157 in 185 msec 2023-07-21 08:14:44,913 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=29 2023-07-21 08:14:44,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=29, state=SUCCESS; OpenRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,38059,1689927281154 in 205 msec 2023-07-21 08:14:44,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, REOPEN/MOVE in 665 msec 2023-07-21 08:14:44,920 DEBUG [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/f 2023-07-21 08:14:44,920 DEBUG [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/f 2023-07-21 08:14:44,921 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd6e953dcd9091b099a55fca303acea3 columnFamilyName f 2023-07-21 08:14:44,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, REOPEN/MOVE in 692 msec 2023-07-21 08:14:44,922 INFO [StoreOpener-dd6e953dcd9091b099a55fca303acea3-1] regionserver.HStore(310): Store=dd6e953dcd9091b099a55fca303acea3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:44,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:44,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,925 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 353d89f36c601f7cb9c6cb8f8f6e2758; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11360458560, jitterRate=0.05802515149116516}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:44,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 353d89f36c601f7cb9c6cb8f8f6e2758: 2023-07-21 08:14:44,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758., pid=40, masterSystemTime=1689927284849 2023-07-21 08:14:44,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:44,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 
2023-07-21 08:14:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43600625f3243faa3f0875d7248f6143, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 08:14:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,944 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,944 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284944"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927284944"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927284944"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927284944"}]},"ts":"1689927284944"} 2023-07-21 08:14:44,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:44,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened dd6e953dcd9091b099a55fca303acea3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9825924480, jitterRate=-0.0848894715309143}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:44,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for dd6e953dcd9091b099a55fca303acea3: 2023-07-21 08:14:44,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3., pid=43, masterSystemTime=1689927284858 2023-07-21 08:14:44,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:44,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
2023-07-21 08:14:44,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=30 2023-07-21 08:14:44,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=30, state=SUCCESS; OpenRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,37025,1689927277157 in 249 msec 2023-07-21 08:14:44,951 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:44,951 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927284951"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927284951"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927284951"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927284951"}]},"ts":"1689927284951"} 2023-07-21 08:14:44,953 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,958 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, REOPEN/MOVE in 720 msec 2023-07-21 08:14:44,959 DEBUG [StoreOpener-43600625f3243faa3f0875d7248f6143-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/f 2023-07-21 08:14:44,959 DEBUG [StoreOpener-43600625f3243faa3f0875d7248f6143-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/f 2023-07-21 08:14:44,960 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43600625f3243faa3f0875d7248f6143 columnFamilyName f 2023-07-21 08:14:44,961 INFO [StoreOpener-43600625f3243faa3f0875d7248f6143-1] regionserver.HStore(310): Store=43600625f3243faa3f0875d7248f6143/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:44,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered 
edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,965 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=35 2023-07-21 08:14:44,965 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=35, state=SUCCESS; OpenRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,38059,1689927281154 in 254 msec 2023-07-21 08:14:44,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, REOPEN/MOVE in 712 msec 2023-07-21 08:14:44,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:44,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 43600625f3243faa3f0875d7248f6143; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11832104160, jitterRate=0.10195057094097137}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:44,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 43600625f3243faa3f0875d7248f6143: 2023-07-21 08:14:44,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143., pid=39, masterSystemTime=1689927284849 2023-07-21 08:14:44,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:44,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 
2023-07-21 08:14:44,987 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:44,987 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927284986"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927284986"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927284986"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927284986"}]},"ts":"1689927284986"} 2023-07-21 08:14:44,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=31 2023-07-21 08:14:44,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=31, state=SUCCESS; OpenRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,37025,1689927277157 in 294 msec 2023-07-21 08:14:44,992 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, REOPEN/MOVE in 747 msec 2023-07-21 08:14:45,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-21 08:14:45,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_854977135. 
2023-07-21 08:14:45,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:45,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:45,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:45,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:45,271 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:45,279 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,300 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927285300"}]},"ts":"1689927285300"} 2023-07-21 08:14:45,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-21 08:14:45,302 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 08:14:45,305 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 08:14:45,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, UNASSIGN}] 2023-07-21 08:14:45,313 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, UNASSIGN 2023-07-21 08:14:45,314 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, UNASSIGN 2023-07-21 08:14:45,314 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, UNASSIGN 2023-07-21 08:14:45,314 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, UNASSIGN 2023-07-21 08:14:45,314 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, UNASSIGN 2023-07-21 08:14:45,316 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:45,316 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:45,316 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:45,317 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927285316"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927285316"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927285316"}]},"ts":"1689927285316"} 2023-07-21 08:14:45,317 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927285316"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927285316"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927285316"}]},"ts":"1689927285316"} 2023-07-21 08:14:45,316 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285316"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927285316"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927285316"}]},"ts":"1689927285316"} 2023-07-21 08:14:45,317 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:45,317 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285317"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927285317"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927285317"}]},"ts":"1689927285317"} 2023-07-21 08:14:45,317 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:45,317 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285317"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927285317"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927285317"}]},"ts":"1689927285317"} 2023-07-21 08:14:45,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE; CloseRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:45,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=45, state=RUNNABLE; CloseRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:45,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=47, state=RUNNABLE; CloseRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:45,324 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=48, state=RUNNABLE; CloseRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:45,325 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=46, state=RUNNABLE; CloseRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:45,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-21 08:14:45,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:45,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing ca2193fcb1fbb15e2bf51790130f2ca4, disabling compactions & flushes 2023-07-21 08:14:45,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:45,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
2023-07-21 08:14:45,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. after waiting 0 ms 2023-07-21 08:14:45,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 2023-07-21 08:14:45,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:45,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 43600625f3243faa3f0875d7248f6143, disabling compactions & flushes 2023-07-21 08:14:45,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:45,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:45,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. after waiting 0 ms 2023-07-21 08:14:45,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:45,484 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 08:14:45,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:45,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143. 2023-07-21 08:14:45,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 43600625f3243faa3f0875d7248f6143: 2023-07-21 08:14:45,492 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:45,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4. 
2023-07-21 08:14:45,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for ca2193fcb1fbb15e2bf51790130f2ca4: 2023-07-21 08:14:45,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:45,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:45,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing b643a71c68ab33e85282d17e3d2c2215, disabling compactions & flushes 2023-07-21 08:14:45,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:45,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:45,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. after waiting 0 ms 2023-07-21 08:14:45,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:45,496 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=43600625f3243faa3f0875d7248f6143, regionState=CLOSED 2023-07-21 08:14:45,496 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285496"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927285496"}]},"ts":"1689927285496"} 2023-07-21 08:14:45,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:45,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:45,498 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing dd6e953dcd9091b099a55fca303acea3, disabling compactions & flushes 2023-07-21 08:14:45,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:45,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:45,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
after waiting 0 ms 2023-07-21 08:14:45,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 2023-07-21 08:14:45,499 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=ca2193fcb1fbb15e2bf51790130f2ca4, regionState=CLOSED 2023-07-21 08:14:45,500 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927285499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927285499"}]},"ts":"1689927285499"} 2023-07-21 08:14:45,505 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=47 2023-07-21 08:14:45,505 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; CloseRegionProcedure 43600625f3243faa3f0875d7248f6143, server=jenkins-hbase5.apache.org,37025,1689927277157 in 178 msec 2023-07-21 08:14:45,509 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=45 2023-07-21 08:14:45,509 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; CloseRegionProcedure ca2193fcb1fbb15e2bf51790130f2ca4, server=jenkins-hbase5.apache.org,38059,1689927281154 in 181 msec 2023-07-21 08:14:45,510 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43600625f3243faa3f0875d7248f6143, UNASSIGN in 198 msec 2023-07-21 08:14:45,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:45,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2193fcb1fbb15e2bf51790130f2ca4, UNASSIGN in 202 msec 2023-07-21 08:14:45,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215. 2023-07-21 08:14:45,514 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for b643a71c68ab33e85282d17e3d2c2215: 2023-07-21 08:14:45,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:45,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:45,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 353d89f36c601f7cb9c6cb8f8f6e2758, disabling compactions & flushes 2023-07-21 08:14:45,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 
2023-07-21 08:14:45,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:45,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. after waiting 0 ms 2023-07-21 08:14:45,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 2023-07-21 08:14:45,527 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b643a71c68ab33e85282d17e3d2c2215, regionState=CLOSED 2023-07-21 08:14:45,527 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285527"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927285527"}]},"ts":"1689927285527"} 2023-07-21 08:14:45,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=48 2023-07-21 08:14:45,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; CloseRegionProcedure b643a71c68ab33e85282d17e3d2c2215, server=jenkins-hbase5.apache.org,37025,1689927277157 in 205 msec 2023-07-21 08:14:45,534 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b643a71c68ab33e85282d17e3d2c2215, UNASSIGN in 225 msec 2023-07-21 08:14:45,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:45,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3. 
2023-07-21 08:14:45,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for dd6e953dcd9091b099a55fca303acea3: 2023-07-21 08:14:45,545 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=dd6e953dcd9091b099a55fca303acea3, regionState=CLOSED 2023-07-21 08:14:45,545 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927285545"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927285545"}]},"ts":"1689927285545"} 2023-07-21 08:14:45,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:45,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=49 2023-07-21 08:14:45,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; CloseRegionProcedure dd6e953dcd9091b099a55fca303acea3, server=jenkins-hbase5.apache.org,38059,1689927281154 in 229 msec 2023-07-21 08:14:45,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dd6e953dcd9091b099a55fca303acea3, UNASSIGN in 243 msec 2023-07-21 08:14:45,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:45,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758. 
2023-07-21 08:14:45,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 353d89f36c601f7cb9c6cb8f8f6e2758: 2023-07-21 08:14:45,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:45,574 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=353d89f36c601f7cb9c6cb8f8f6e2758, regionState=CLOSED 2023-07-21 08:14:45,574 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927285574"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927285574"}]},"ts":"1689927285574"} 2023-07-21 08:14:45,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=46 2023-07-21 08:14:45,580 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=46, state=SUCCESS; CloseRegionProcedure 353d89f36c601f7cb9c6cb8f8f6e2758, server=jenkins-hbase5.apache.org,37025,1689927277157 in 252 msec 2023-07-21 08:14:45,583 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=44 2023-07-21 08:14:45,583 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=353d89f36c601f7cb9c6cb8f8f6e2758, UNASSIGN in 273 msec 2023-07-21 08:14:45,585 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927285584"}]},"ts":"1689927285584"} 2023-07-21 08:14:45,588 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 08:14:45,592 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 08:14:45,595 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 306 msec 2023-07-21 08:14:45,601 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 08:14:45,603 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 08:14:45,603 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 08:14:45,604 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:14:45,604 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 08:14:45,604 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter 
for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 08:14:45,604 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 08:14:45,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-21 08:14:45,607 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-21 08:14:45,609 INFO [Listener at localhost/43961] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$6(2260): Client=jenkins//172.31.10.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:45,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 08:14:45,628 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 08:14:45,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-21 08:14:45,643 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:45,643 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:45,643 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:45,643 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:45,643 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:45,648 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits] 2023-07-21 08:14:45,649 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits] 2023-07-21 08:14:45,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits] 2023-07-21 08:14:45,649 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits] 2023-07-21 08:14:45,649 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits] 2023-07-21 08:14:45,665 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3/recovered.edits/7.seqid 2023-07-21 08:14:45,666 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758/recovered.edits/7.seqid 2023-07-21 08:14:45,666 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215/recovered.edits/7.seqid 2023-07-21 08:14:45,666 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143/recovered.edits/7.seqid 2023-07-21 08:14:45,667 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dd6e953dcd9091b099a55fca303acea3 2023-07-21 08:14:45,667 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/353d89f36c601f7cb9c6cb8f8f6e2758 2023-07-21 08:14:45,667 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b643a71c68ab33e85282d17e3d2c2215 2023-07-21 08:14:45,668 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43600625f3243faa3f0875d7248f6143 2023-07-21 08:14:45,669 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4/recovered.edits/7.seqid 2023-07-21 08:14:45,669 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2193fcb1fbb15e2bf51790130f2ca4 2023-07-21 08:14:45,670 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 08:14:45,703 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 08:14:45,710 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 08:14:45,710 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
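The "Started truncating" and TruncateTableProcedure (preserveSplits=true) entries above, followed by the HFileArchiver runs, correspond to a single client-side truncate call. A minimal sketch under the same assumptions as before (disabled table, name taken from the log); preserveSplits=true asks the procedure to keep the existing split points rather than collapsing the table to one region:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // The table must already be disabled. preserveSplits=true recreates the
      // table with its existing split points instead of a single region.
      admin.truncateTable(table, true);
    }
  }
}
```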
2023-07-21 08:14:45,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927285711"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927285711"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927285711"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927285711"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927285711"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,724 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 08:14:45,724 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ca2193fcb1fbb15e2bf51790130f2ca4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927282792.ca2193fcb1fbb15e2bf51790130f2ca4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 353d89f36c601f7cb9c6cb8f8f6e2758, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927282792.353d89f36c601f7cb9c6cb8f8f6e2758.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 43600625f3243faa3f0875d7248f6143, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927282792.43600625f3243faa3f0875d7248f6143.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b643a71c68ab33e85282d17e3d2c2215, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927282792.b643a71c68ab33e85282d17e3d2c2215.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => dd6e953dcd9091b099a55fca303acea3, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927282792.dd6e953dcd9091b099a55fca303acea3.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 08:14:45,724 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 08:14:45,724 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927285724"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:45,730 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 08:14:45,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-21 08:14:45,742 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:45,742 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:45,742 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:45,742 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:45,742 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:45,743 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af empty. 2023-07-21 08:14:45,743 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 empty. 2023-07-21 08:14:45,743 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 empty. 2023-07-21 08:14:45,743 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:45,743 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 empty. 
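The Delete entries above show how the table is laid out in hbase:meta: one row per region, keyed by table name, start key, region id and encoded region name, plus a table-state row keyed by the bare table name. Purely as an illustration of that row-key layout (not part of the test), equivalent rows can be read back from a client by scanning hbase:meta with a row prefix; the prefix string below is an assumption based on the row keys printed in this log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaRowsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows for a default-namespace table all start with "<tableName>,".
      Scan scan = new Scan()
          .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
          .addFamily(Bytes.toBytes("info"));
      try (ResultScanner rs = meta.getScanner(scan)) {
        for (Result r : rs) {
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          System.out.println(Bytes.toStringBinary(r.getRow())
              + " state=" + (state == null ? "?" : Bytes.toString(state)));
        }
      }
    }
  }
}
```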
2023-07-21 08:14:45,744 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:45,744 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 empty. 2023-07-21 08:14:45,744 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:45,744 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:45,744 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:45,744 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 08:14:45,777 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:45,783 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 24526adaf7248a326ea24354f69f7a89, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:45,783 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4d6bdfb8ad9a64373da3988f83bbb9af, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:45,788 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ff0a2ee544a7097ab5ee60d8eb440ab3, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 24526adaf7248a326ea24354f69f7a89, disabling compactions & flushes 2023-07-21 08:14:45,831 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. after waiting 0 ms 2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:45,831 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 
2023-07-21 08:14:45,831 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 24526adaf7248a326ea24354f69f7a89: 2023-07-21 08:14:45,832 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f1dd0f0b6b6075f2e82606d4c0e4abf1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:45,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:45,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 4d6bdfb8ad9a64373da3988f83bbb9af, disabling compactions & flushes 2023-07-21 08:14:45,833 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:45,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:45,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. after waiting 0 ms 2023-07-21 08:14:45,833 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:45,834 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 
2023-07-21 08:14:45,834 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 4d6bdfb8ad9a64373da3988f83bbb9af: 2023-07-21 08:14:45,834 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5ee93947c3001cb5cb982fba3369b7d1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f1dd0f0b6b6075f2e82606d4c0e4abf1, disabling compactions & flushes 2023-07-21 08:14:45,852 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. after waiting 0 ms 2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:45,852 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 
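The "creating {ENCODED => ...}" entries above print the table descriptor that the truncate reuses: a single family 'f' with VERSIONS=1, BLOOMFILTER=NONE, BLOCKSIZE=65536 and defaults otherwise, plus the preserved split keys. As a sketch of how an equivalent descriptor and split layout could be built from a client (illustrative only; this is not what TruncateTableProcedure executes internally):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableLikeLogSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

    // Family 'f' with the attributes printed in the log; everything else at defaults.
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.NONE)
            .setBlocksize(65536)
            .build())
        .build();

    // Split points matching the region boundaries shown above; the last region
    // ('zzzzz' to end) is implied, so only four split points are passed.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz"),
    };

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.createTable(desc, splits);
    }
  }
}
```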
2023-07-21 08:14:45,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f1dd0f0b6b6075f2e82606d4c0e4abf1: 2023-07-21 08:14:45,857 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:45,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5ee93947c3001cb5cb982fba3369b7d1, disabling compactions & flushes 2023-07-21 08:14:45,858 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:45,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:45,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. after waiting 0 ms 2023-07-21 08:14:45,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:45,858 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:45,858 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5ee93947c3001cb5cb982fba3369b7d1: 2023-07-21 08:14:45,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-21 08:14:46,233 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ff0a2ee544a7097ab5ee60d8eb440ab3, disabling compactions & flushes 2023-07-21 08:14:46,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-21 08:14:46,234 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 
2023-07-21 08:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. after waiting 0 ms 2023-07-21 08:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,234 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,234 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ff0a2ee544a7097ab5ee60d8eb440ab3: 2023-07-21 08:14:46,238 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286238"}]},"ts":"1689927286238"} 2023-07-21 08:14:46,238 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286238"}]},"ts":"1689927286238"} 2023-07-21 08:14:46,238 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286238"}]},"ts":"1689927286238"} 2023-07-21 08:14:46,238 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286238"}]},"ts":"1689927286238"} 2023-07-21 08:14:46,239 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286238"}]},"ts":"1689927286238"} 2023-07-21 08:14:46,241 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 08:14:46,242 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927286242"}]},"ts":"1689927286242"} 2023-07-21 08:14:46,244 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 08:14:46,250 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:46,250 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:46,250 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:46,250 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:46,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, ASSIGN}] 2023-07-21 08:14:46,253 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, ASSIGN 2023-07-21 08:14:46,253 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, ASSIGN 2023-07-21 08:14:46,253 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, ASSIGN 2023-07-21 08:14:46,254 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, ASSIGN 2023-07-21 08:14:46,254 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, ASSIGN 2023-07-21 08:14:46,255 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:46,255 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:46,255 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:46,255 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:46,255 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:46,405 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
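After the balancer picks targets ("Reassigned 5 regions. 5 retained the pre-restart assignment."), the ASSIGN procedures above move each new region to OPENING on its chosen server. Once they complete, the placement can be inspected from a client with a RegionLocator; a small sketch, again assuming the table name from the log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionPlacementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Print each region's encoded name, key range, and hosting server.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName()
            + " [" + Bytes.toStringBinary(loc.getRegion().getStartKey())
            + "," + Bytes.toStringBinary(loc.getRegion().getEndKey()) + ")"
            + " -> " + loc.getServerName());
      }
    }
  }
}
```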
2023-07-21 08:14:46,409 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=5ee93947c3001cb5cb982fba3369b7d1, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,409 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=ff0a2ee544a7097ab5ee60d8eb440ab3, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,409 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=4d6bdfb8ad9a64373da3988f83bbb9af, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,409 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f1dd0f0b6b6075f2e82606d4c0e4abf1, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,409 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286409"}]},"ts":"1689927286409"} 2023-07-21 08:14:46,409 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286409"}]},"ts":"1689927286409"} 2023-07-21 08:14:46,409 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286409"}]},"ts":"1689927286409"} 2023-07-21 08:14:46,409 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286409"}]},"ts":"1689927286409"} 2023-07-21 08:14:46,409 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=24526adaf7248a326ea24354f69f7a89, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,410 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286409"}]},"ts":"1689927286409"} 2023-07-21 08:14:46,412 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
5ee93947c3001cb5cb982fba3369b7d1, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:46,414 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure 4d6bdfb8ad9a64373da3988f83bbb9af, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:46,417 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=59, state=RUNNABLE; OpenRegionProcedure f1dd0f0b6b6075f2e82606d4c0e4abf1, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=58, state=RUNNABLE; OpenRegionProcedure ff0a2ee544a7097ab5ee60d8eb440ab3, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=57, state=RUNNABLE; OpenRegionProcedure 24526adaf7248a326ea24354f69f7a89, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4d6bdfb8ad9a64373da3988f83bbb9af, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 08:14:46,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,575 INFO [StoreOpener-4d6bdfb8ad9a64373da3988f83bbb9af-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,577 DEBUG [StoreOpener-4d6bdfb8ad9a64373da3988f83bbb9af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/f 2023-07-21 08:14:46,577 DEBUG [StoreOpener-4d6bdfb8ad9a64373da3988f83bbb9af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/f 2023-07-21 08:14:46,577 INFO [StoreOpener-4d6bdfb8ad9a64373da3988f83bbb9af-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4d6bdfb8ad9a64373da3988f83bbb9af columnFamilyName f 2023-07-21 08:14:46,578 INFO [StoreOpener-4d6bdfb8ad9a64373da3988f83bbb9af-1] regionserver.HStore(310): Store=4d6bdfb8ad9a64373da3988f83bbb9af/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:46,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 
2023-07-21 08:14:46,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ff0a2ee544a7097ab5ee60d8eb440ab3, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 08:14:46,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:46,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 4d6bdfb8ad9a64373da3988f83bbb9af; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10271336800, jitterRate=-0.04340721666812897}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:46,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 4d6bdfb8ad9a64373da3988f83bbb9af: 2023-07-21 08:14:46,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af., pid=62, masterSystemTime=1689927286568 2023-07-21 08:14:46,591 INFO [StoreOpener-ff0a2ee544a7097ab5ee60d8eb440ab3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 
2023-07-21 08:14:46,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ee93947c3001cb5cb982fba3369b7d1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 08:14:46,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,594 DEBUG [StoreOpener-ff0a2ee544a7097ab5ee60d8eb440ab3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/f 2023-07-21 08:14:46,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,594 DEBUG [StoreOpener-ff0a2ee544a7097ab5ee60d8eb440ab3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/f 2023-07-21 08:14:46,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,594 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=4d6bdfb8ad9a64373da3988f83bbb9af, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,594 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286594"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927286594"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927286594"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927286594"}]},"ts":"1689927286594"} 2023-07-21 08:14:46,595 INFO [StoreOpener-ff0a2ee544a7097ab5ee60d8eb440ab3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ff0a2ee544a7097ab5ee60d8eb440ab3 columnFamilyName f 2023-07-21 08:14:46,597 INFO [StoreOpener-ff0a2ee544a7097ab5ee60d8eb440ab3-1] regionserver.HStore(310): Store=ff0a2ee544a7097ab5ee60d8eb440ab3/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:46,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,600 INFO [StoreOpener-5ee93947c3001cb5cb982fba3369b7d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,603 DEBUG [StoreOpener-5ee93947c3001cb5cb982fba3369b7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/f 2023-07-21 08:14:46,603 DEBUG [StoreOpener-5ee93947c3001cb5cb982fba3369b7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/f 2023-07-21 08:14:46,603 INFO [StoreOpener-5ee93947c3001cb5cb982fba3369b7d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ee93947c3001cb5cb982fba3369b7d1 columnFamilyName f 2023-07-21 08:14:46,605 INFO [StoreOpener-5ee93947c3001cb5cb982fba3369b7d1-1] regionserver.HStore(310): Store=5ee93947c3001cb5cb982fba3369b7d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:46,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 
08:14:46,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-21 08:14:46,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure 4d6bdfb8ad9a64373da3988f83bbb9af, server=jenkins-hbase5.apache.org,38059,1689927281154 in 190 msec 2023-07-21 08:14:46,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, ASSIGN in 357 msec 2023-07-21 08:14:46,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:46,616 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 5ee93947c3001cb5cb982fba3369b7d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9524954080, jitterRate=-0.11291952431201935}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:46,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 5ee93947c3001cb5cb982fba3369b7d1: 2023-07-21 08:14:46,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:46,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1., pid=61, masterSystemTime=1689927286568 2023-07-21 08:14:46,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened ff0a2ee544a7097ab5ee60d8eb440ab3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11280347840, jitterRate=0.05056425929069519}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:46,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for ff0a2ee544a7097ab5ee60d8eb440ab3: 2023-07-21 08:14:46,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3., pid=64, masterSystemTime=1689927286582 2023-07-21 08:14:46,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:46,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 
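The CompactionConfiguration lines above spell out the store defaults in effect (minCompactSize 128 MB, 3-10 files per compaction, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 ms with 0.5 jitter). A hedged sketch of the standard HBase configuration keys behind those numbers; the values below simply restate what is logged:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 134217728L);        // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                       // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                      // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);        // off-peak ratio
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);           // major period: 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);         // major jitter
      }
    }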
2023-07-21 08:14:46,621 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=5ee93947c3001cb5cb982fba3369b7d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,622 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286621"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927286621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927286621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927286621"}]},"ts":"1689927286621"} 2023-07-21 08:14:46,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24526adaf7248a326ea24354f69f7a89, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 08:14:46,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,623 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=ff0a2ee544a7097ab5ee60d8eb440ab3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,623 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286623"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927286623"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927286623"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927286623"}]},"ts":"1689927286623"} 2023-07-21 08:14:46,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,629 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure 
pid=61, resume processing ppid=60 2023-07-21 08:14:46,629 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 5ee93947c3001cb5cb982fba3369b7d1, server=jenkins-hbase5.apache.org,38059,1689927281154 in 213 msec 2023-07-21 08:14:46,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=58 2023-07-21 08:14:46,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=58, state=SUCCESS; OpenRegionProcedure ff0a2ee544a7097ab5ee60d8eb440ab3, server=jenkins-hbase5.apache.org,37025,1689927277157 in 207 msec 2023-07-21 08:14:46,631 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, ASSIGN in 378 msec 2023-07-21 08:14:46,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, ASSIGN in 379 msec 2023-07-21 08:14:46,636 INFO [StoreOpener-24526adaf7248a326ea24354f69f7a89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,638 DEBUG [StoreOpener-24526adaf7248a326ea24354f69f7a89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/f 2023-07-21 08:14:46,638 DEBUG [StoreOpener-24526adaf7248a326ea24354f69f7a89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/f 2023-07-21 08:14:46,639 INFO [StoreOpener-24526adaf7248a326ea24354f69f7a89-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24526adaf7248a326ea24354f69f7a89 columnFamilyName f 2023-07-21 08:14:46,640 INFO [StoreOpener-24526adaf7248a326ea24354f69f7a89-1] regionserver.HStore(310): Store=24526adaf7248a326ea24354f69f7a89/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:46,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,643 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:46,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 24526adaf7248a326ea24354f69f7a89; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11656828800, jitterRate=0.08562678098678589}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:46,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 24526adaf7248a326ea24354f69f7a89: 2023-07-21 08:14:46,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89., pid=65, masterSystemTime=1689927286582 2023-07-21 08:14:46,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 
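The desiredMaxFileSize/jitterRate pairs logged by the split policy above are consistent with the configured hbase.hregion.max.filesize (10737418240 bytes, the branch-2.4 default) scaled by (1 + jitterRate); e.g. 10737418240 * (1 + 0.08562678) is roughly 11656828800 for region 24526adaf7248a326ea24354f69f7a89. A tiny sketch of that arithmetic (the relation is inferred from the logged values, not quoted from the source):

    public class SplitSizeJitter {
      public static void main(String[] args) {
        long base = 10_737_418_240L;               // hbase.hregion.max.filesize default (10 GB)
        double jitterRate = 0.08562678098678589;   // value logged for region 24526adaf7248a326ea24354f69f7a89
        long desiredMaxFileSize = (long) (base * (1 + jitterRate));
        System.out.println(desiredMaxFileSize);    // ~11656828800, matching the log line above
      }
    }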
2023-07-21 08:14:46,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f1dd0f0b6b6075f2e82606d4c0e4abf1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 08:14:46,653 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=24526adaf7248a326ea24354f69f7a89, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:46,653 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286653"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927286653"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927286653"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927286653"}]},"ts":"1689927286653"} 2023-07-21 08:14:46,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,655 INFO [StoreOpener-f1dd0f0b6b6075f2e82606d4c0e4abf1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,657 DEBUG [StoreOpener-f1dd0f0b6b6075f2e82606d4c0e4abf1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/f 2023-07-21 08:14:46,657 DEBUG [StoreOpener-f1dd0f0b6b6075f2e82606d4c0e4abf1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/f 2023-07-21 08:14:46,658 INFO [StoreOpener-f1dd0f0b6b6075f2e82606d4c0e4abf1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f1dd0f0b6b6075f2e82606d4c0e4abf1 columnFamilyName f 2023-07-21 08:14:46,660 INFO [StoreOpener-f1dd0f0b6b6075f2e82606d4c0e4abf1-1] regionserver.HStore(310): Store=f1dd0f0b6b6075f2e82606d4c0e4abf1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:46,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=57 2023-07-21 08:14:46,664 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=57, state=SUCCESS; OpenRegionProcedure 24526adaf7248a326ea24354f69f7a89, server=jenkins-hbase5.apache.org,37025,1689927277157 in 232 msec 2023-07-21 08:14:46,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:46,669 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened f1dd0f0b6b6075f2e82606d4c0e4abf1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9921011040, jitterRate=-0.07603384554386139}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:46,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for f1dd0f0b6b6075f2e82606d4c0e4abf1: 2023-07-21 08:14:46,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1., pid=63, masterSystemTime=1689927286582 2023-07-21 08:14:46,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:46,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 
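Once the remaining ASSIGN procedures above complete, the five regions of the truncated table are live again. A hypothetical sketch of listing them from a client via the standard Admin.getRegions call (connection setup and class name are illustrative):

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegions {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          List<RegionInfo> regions =
              admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
          for (RegionInfo ri : regions) {
            // Prints encoded name plus [startKey, endKey), mirroring the ENCODED/STARTKEY/ENDKEY log lines.
            System.out.println(ri.getEncodedName() + " ["
                + Bytes.toStringBinary(ri.getStartKey()) + ", "
                + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
        }
      }
    }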
2023-07-21 08:14:46,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, ASSIGN in 413 msec 2023-07-21 08:14:46,675 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f1dd0f0b6b6075f2e82606d4c0e4abf1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,676 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286675"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927286675"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927286675"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927286675"}]},"ts":"1689927286675"} 2023-07-21 08:14:46,685 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=59 2023-07-21 08:14:46,685 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=59, state=SUCCESS; OpenRegionProcedure f1dd0f0b6b6075f2e82606d4c0e4abf1, server=jenkins-hbase5.apache.org,37025,1689927277157 in 261 msec 2023-07-21 08:14:46,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=55 2023-07-21 08:14:46,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, ASSIGN in 434 msec 2023-07-21 08:14:46,687 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927286687"}]},"ts":"1689927286687"} 2023-07-21 08:14:46,690 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 08:14:46,692 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 08:14:46,695 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.0760 sec 2023-07-21 08:14:46,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-21 08:14:46,736 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-21 08:14:46,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:46,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:46,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:46,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:46,740 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:46,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:46,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:46,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 08:14:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927286752"}]},"ts":"1689927286752"} 2023-07-21 08:14:46,754 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 08:14:46,756 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 08:14:46,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, UNASSIGN}] 2023-07-21 08:14:46,760 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, UNASSIGN 2023-07-21 08:14:46,762 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, UNASSIGN 2023-07-21 08:14:46,762 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, UNASSIGN 2023-07-21 08:14:46,762 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, UNASSIGN 2023-07-21 08:14:46,763 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, UNASSIGN 2023-07-21 08:14:46,764 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=5ee93947c3001cb5cb982fba3369b7d1, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,764 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=4d6bdfb8ad9a64373da3988f83bbb9af, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:46,764 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f1dd0f0b6b6075f2e82606d4c0e4abf1, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,764 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286764"}]},"ts":"1689927286764"} 2023-07-21 08:14:46,764 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=ff0a2ee544a7097ab5ee60d8eb440ab3, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,764 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286764"}]},"ts":"1689927286764"} 2023-07-21 08:14:46,764 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286764"}]},"ts":"1689927286764"} 2023-07-21 08:14:46,764 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=24526adaf7248a326ea24354f69f7a89, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:46,765 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286764"}]},"ts":"1689927286764"} 2023-07-21 08:14:46,764 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927286764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927286764"}]},"ts":"1689927286764"} 2023-07-21 08:14:46,767 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; CloseRegionProcedure 4d6bdfb8ad9a64373da3988f83bbb9af, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:46,769 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=69, state=RUNNABLE; CloseRegionProcedure ff0a2ee544a7097ab5ee60d8eb440ab3, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,770 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=70, state=RUNNABLE; CloseRegionProcedure f1dd0f0b6b6075f2e82606d4c0e4abf1, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,773 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=68, state=RUNNABLE; CloseRegionProcedure 24526adaf7248a326ea24354f69f7a89, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:46,776 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 5ee93947c3001cb5cb982fba3369b7d1, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:46,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 08:14:46,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 4d6bdfb8ad9a64373da3988f83bbb9af, disabling compactions & flushes 2023-07-21 08:14:46,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. after waiting 0 ms 2023-07-21 08:14:46,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 
2023-07-21 08:14:46,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing ff0a2ee544a7097ab5ee60d8eb440ab3, disabling compactions & flushes 2023-07-21 08:14:46,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. after waiting 0 ms 2023-07-21 08:14:46,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:46,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af. 2023-07-21 08:14:46,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 4d6bdfb8ad9a64373da3988f83bbb9af: 2023-07-21 08:14:46,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:46,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:46,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 5ee93947c3001cb5cb982fba3369b7d1, disabling compactions & flushes 2023-07-21 08:14:46,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:46,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:46,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 
after waiting 0 ms 2023-07-21 08:14:46,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:46,935 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=4d6bdfb8ad9a64373da3988f83bbb9af, regionState=CLOSED 2023-07-21 08:14:46,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3. 2023-07-21 08:14:46,935 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286935"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286935"}]},"ts":"1689927286935"} 2023-07-21 08:14:46,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for ff0a2ee544a7097ab5ee60d8eb440ab3: 2023-07-21 08:14:46,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:46,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing f1dd0f0b6b6075f2e82606d4c0e4abf1, disabling compactions & flushes 2023-07-21 08:14:46,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:46,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 2023-07-21 08:14:46,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. after waiting 0 ms 2023-07-21 08:14:46,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 
2023-07-21 08:14:46,939 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=ff0a2ee544a7097ab5ee60d8eb440ab3, regionState=CLOSED 2023-07-21 08:14:46,939 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286939"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286939"}]},"ts":"1689927286939"} 2023-07-21 08:14:46,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:46,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1. 2023-07-21 08:14:46,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 5ee93947c3001cb5cb982fba3369b7d1: 2023-07-21 08:14:46,946 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-21 08:14:46,946 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; CloseRegionProcedure 4d6bdfb8ad9a64373da3988f83bbb9af, server=jenkins-hbase5.apache.org,38059,1689927281154 in 170 msec 2023-07-21 08:14:46,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:46,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:46,948 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=69 2023-07-21 08:14:46,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4d6bdfb8ad9a64373da3988f83bbb9af, UNASSIGN in 188 msec 2023-07-21 08:14:46,949 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; CloseRegionProcedure ff0a2ee544a7097ab5ee60d8eb440ab3, server=jenkins-hbase5.apache.org,37025,1689927277157 in 173 msec 2023-07-21 08:14:46,949 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=5ee93947c3001cb5cb982fba3369b7d1, regionState=CLOSED 2023-07-21 08:14:46,949 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689927286949"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286949"}]},"ts":"1689927286949"} 2023-07-21 08:14:46,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1. 
2023-07-21 08:14:46,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for f1dd0f0b6b6075f2e82606d4c0e4abf1: 2023-07-21 08:14:46,951 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff0a2ee544a7097ab5ee60d8eb440ab3, UNASSIGN in 191 msec 2023-07-21 08:14:46,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:46,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 24526adaf7248a326ea24354f69f7a89, disabling compactions & flushes 2023-07-21 08:14:46,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. after waiting 0 ms 2023-07-21 08:14:46,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 2023-07-21 08:14:46,955 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f1dd0f0b6b6075f2e82606d4c0e4abf1, regionState=CLOSED 2023-07-21 08:14:46,955 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286955"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286955"}]},"ts":"1689927286955"} 2023-07-21 08:14:46,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-21 08:14:46,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 5ee93947c3001cb5cb982fba3369b7d1, server=jenkins-hbase5.apache.org,38059,1689927281154 in 178 msec 2023-07-21 08:14:46,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:46,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89. 
2023-07-21 08:14:46,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 24526adaf7248a326ea24354f69f7a89: 2023-07-21 08:14:46,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=70 2023-07-21 08:14:46,962 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ee93947c3001cb5cb982fba3369b7d1, UNASSIGN in 202 msec 2023-07-21 08:14:46,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=70, state=SUCCESS; CloseRegionProcedure f1dd0f0b6b6075f2e82606d4c0e4abf1, server=jenkins-hbase5.apache.org,37025,1689927277157 in 187 msec 2023-07-21 08:14:46,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:46,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f1dd0f0b6b6075f2e82606d4c0e4abf1, UNASSIGN in 204 msec 2023-07-21 08:14:46,964 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=24526adaf7248a326ea24354f69f7a89, regionState=CLOSED 2023-07-21 08:14:46,965 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689927286964"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927286964"}]},"ts":"1689927286964"} 2023-07-21 08:14:46,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=68 2023-07-21 08:14:46,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=68, state=SUCCESS; CloseRegionProcedure 24526adaf7248a326ea24354f69f7a89, server=jenkins-hbase5.apache.org,37025,1689927277157 in 193 msec 2023-07-21 08:14:46,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=66 2023-07-21 08:14:46,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=24526adaf7248a326ea24354f69f7a89, UNASSIGN in 211 msec 2023-07-21 08:14:46,972 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927286972"}]},"ts":"1689927286972"} 2023-07-21 08:14:46,974 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 08:14:46,975 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 08:14:46,980 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 235 msec 2023-07-21 08:14:47,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-21 08:14:47,053 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-21 08:14:47,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,068 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_854977135' 2023-07-21 08:14:47,070 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:47,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 08:14:47,088 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:47,088 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:47,088 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:47,088 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:47,088 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:47,093 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/recovered.edits] 2023-07-21 08:14:47,093 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/recovered.edits] 2023-07-21 08:14:47,094 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/recovered.edits] 2023-07-21 08:14:47,094 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/recovered.edits] 2023-07-21 08:14:47,096 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/recovered.edits] 2023-07-21 08:14:47,108 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1/recovered.edits/4.seqid 2023-07-21 08:14:47,109 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89/recovered.edits/4.seqid 2023-07-21 08:14:47,109 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f1dd0f0b6b6075f2e82606d4c0e4abf1 2023-07-21 08:14:47,109 DEBUG [HFileArchiver-4] 
backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af/recovered.edits/4.seqid 2023-07-21 08:14:47,110 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1/recovered.edits/4.seqid 2023-07-21 08:14:47,110 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/24526adaf7248a326ea24354f69f7a89 2023-07-21 08:14:47,110 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4d6bdfb8ad9a64373da3988f83bbb9af 2023-07-21 08:14:47,110 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ee93947c3001cb5cb982fba3369b7d1 2023-07-21 08:14:47,112 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3/recovered.edits/4.seqid 2023-07-21 08:14:47,113 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff0a2ee544a7097ab5ee60d8eb440ab3 2023-07-21 08:14:47,113 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 08:14:47,116 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,122 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 08:14:47,125 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 08:14:47,126 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,126 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 08:14:47,127 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927287126"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,127 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927287126"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,127 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927287126"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,127 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927287126"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,127 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927287126"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,134 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 08:14:47,134 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4d6bdfb8ad9a64373da3988f83bbb9af, NAME => 'Group_testTableMoveTruncateAndDrop,,1689927285672.4d6bdfb8ad9a64373da3988f83bbb9af.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 24526adaf7248a326ea24354f69f7a89, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689927285672.24526adaf7248a326ea24354f69f7a89.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ff0a2ee544a7097ab5ee60d8eb440ab3, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689927285672.ff0a2ee544a7097ab5ee60d8eb440ab3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => f1dd0f0b6b6075f2e82606d4c0e4abf1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689927285672.f1dd0f0b6b6075f2e82606d4c0e4abf1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 5ee93947c3001cb5cb982fba3369b7d1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689927285672.5ee93947c3001cb5cb982fba3369b7d1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 08:14:47,134 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 08:14:47,134 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927287134"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:47,137 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 08:14:47,140 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 08:14:47,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 80 msec 2023-07-21 08:14:47,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-21 08:14:47,188 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-21 08:14:47,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,194 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37025] ipc.CallRunner(144): callId: 161 service: ClientService methodName: Scan size: 147 connection: 172.31.10.131:34006 deadline: 1689927347193, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase5.apache.org port=40889 startCode=1689927276956. As of locationSeqNum=6. 2023-07-21 08:14:47,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:14:47,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:47,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:47,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:14:47,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:14:47,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:14:47,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:47,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_854977135, current retry=0 2023-07-21 08:14:47,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_854977135 => default 2023-07-21 08:14:47,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_testTableMoveTruncateAndDrop_854977135 2023-07-21 08:14:47,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:47,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,352 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:47,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:47,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-21 08:14:47,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:47,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928487369, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:47,370 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:14:47,372 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:47,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,374 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:47,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:47,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,408 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=495 (was 422) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59404@0x0b2160ba-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp733013092-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_174850753_17 at /127.0.0.1:47786 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40383 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase5:38059 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59404@0x0b2160ba sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp733013092-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1636371636_17 at /127.0.0.1:47720 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase5:38059-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1636371636_17 at /127.0.0.1:47264 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1636371636_17 at /127.0.0.1:36182 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b-prefix:jenkins-hbase5.apache.org,38059,1689927281154 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7ff3ba12-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp733013092-636-acceptor-0@59668b65-ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:46337} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59404@0x0b2160ba-SendThread(127.0.0.1:59404) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp733013092-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase5:38059Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: qtp733013092-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38059 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:40383 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp733013092-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=786 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=531 (was 517) - SystemLoadAverage LEAK? -, ProcessCount=167 (was 168), AvailableMemoryMB=3235 (was 3629) 2023-07-21 08:14:47,430 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=495, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=531, ProcessCount=167, AvailableMemoryMB=3233 2023-07-21 08:14:47,430 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 08:14:47,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:14:47,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:47,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:47,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:47,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,454 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:47,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:47,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,461 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:47,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928487472, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:47,473 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:14:47,474 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:47,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,475 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:47,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:47,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup foo* 2023-07-21 08:14:47,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.10.131:57944 deadline: 1689928487478, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 08:14:47,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup foo@ 2023-07-21 08:14:47,480 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.10.131:57944 deadline: 1689928487480, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 08:14:47,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup - 2023-07-21 08:14:47,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.10.131:57944 deadline: 1689928487482, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 08:14:47,483 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup foo_123 2023-07-21 08:14:47,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 08:14:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:47,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
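[Editor's sketch] The records above trace TestRSGroupsAdmin1#testValidGroupNames: addRSGroup calls for "foo*", "foo@" and "-" are rejected by RSGroupInfoManagerImpl.checkGroupName with a ConstraintException ("RSGroup name should only contain alphanumeric characters"), while "foo_123" is accepted and written to ZooKeeper. Below is a minimal, self-contained Java sketch approximating the name check this test appears to exercise; the class name, regex and use of IllegalArgumentException are assumptions for illustration only (note that foo_123 passing implies underscores are tolerated despite the "alphanumeric characters" wording), not the actual HBase implementation.

    // Hypothetical sketch (not HBase source): approximates the validation that
    // testValidGroupNames hits via RSGroupInfoManagerImpl.checkGroupName.
    import java.util.regex.Pattern;

    public final class GroupNameCheckSketch {
        // foo_123 is accepted in the log, so underscores are assumed to be allowed
        // alongside letters and digits; foo*, foo@ and "-" are rejected.
        private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

        static void checkGroupName(String name) {
            if (name == null || !VALID.matcher(name).matches()) {
                // The server side wraps this in ConstraintException; a plain
                // exception is used here to keep the sketch dependency-free.
                throw new IllegalArgumentException(
                    "RSGroup name should only contain alphanumeric characters: " + name);
            }
        }

        public static void main(String[] args) {
            // The same four names the test sends to RSGroupAdminService.AddRSGroup.
            for (String n : new String[] { "foo*", "foo@", "-", "foo_123" }) {
                try {
                    checkGroupName(n);
                    System.out.println(n + " -> accepted");
                } catch (IllegalArgumentException e) {
                    System.out.println(n + " -> rejected: " + e.getMessage());
                }
            }
        }
    }
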
2023-07-21 08:14:47,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:47,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup foo_123 2023-07-21 08:14:47,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:14:47,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:14:47,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:47,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:47,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:47,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,562 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:47,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:47,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:47,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:47,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928487588, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:47,589 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:47,591 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:47,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,593 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:47,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:47,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,615 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 495) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=786 (was 786), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=531 (was 531), ProcessCount=167 (was 167), AvailableMemoryMB=3226 (was 3233) 2023-07-21 08:14:47,636 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=531, ProcessCount=167, AvailableMemoryMB=3224 2023-07-21 08:14:47,636 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 08:14:47,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:47,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:14:47,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:47,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:47,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:47,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:47,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:47,656 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:47,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:47,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:47,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:47,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:47,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928487673, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:47,674 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:47,677 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:47,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,679 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:47,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:47,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:47,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:47,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup bar 
2023-07-21 08:14:47,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:47,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:47,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:47,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:47,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:47,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup bar 2023-07-21 08:14:47,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:47,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:47,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:47,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:47,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-21 08:14:47,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 08:14:47,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 08:14:47,710 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 08:14:47,711 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,40169,1689927277346, state=CLOSING 2023-07-21 08:14:47,713 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-21 08:14:47,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:47,713 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:14:47,868 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 08:14:47,870 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:14:47,870 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:14:47,870 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:14:47,870 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:14:47,870 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:14:47,872 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=43.12 KB heapSize=66.86 KB 2023-07-21 08:14:47,938 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=40.06 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/info/01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:47,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:47,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/rep_barrier/66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:47,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:48,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=98 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/table/763477a22037401c85835099a2439057 2023-07-21 08:14:48,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763477a22037401c85835099a2439057 2023-07-21 08:14:48,043 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/info/01d199fed890416fb33e37e0a1497d00 as 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info/01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:48,051 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:48,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info/01d199fed890416fb33e37e0a1497d00, entries=50, sequenceid=98, filesize=10.6 K 2023-07-21 08:14:48,054 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/rep_barrier/66b041821f69497db882f4b44a016bf2 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier/66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:48,061 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:48,061 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier/66b041821f69497db882f4b44a016bf2, entries=10, sequenceid=98, filesize=6.1 K 2023-07-21 08:14:48,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/table/763477a22037401c85835099a2439057 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table/763477a22037401c85835099a2439057 2023-07-21 08:14:48,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763477a22037401c85835099a2439057 2023-07-21 08:14:48,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table/763477a22037401c85835099a2439057, entries=15, sequenceid=98, filesize=6.2 K 2023-07-21 08:14:48,072 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~43.12 KB/44157, heapSize ~66.81 KB/68416, currentSize=0 B/0 for 1588230740 in 201ms, sequenceid=98, compaction requested=false 2023-07-21 08:14:48,089 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/recovered.edits/101.seqid, newMaxSeqId=101, maxSeqId=1 2023-07-21 08:14:48,089 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:14:48,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:14:48,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-21 08:14:48,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase5.apache.org,40889,1689927276956 record at close sequenceid=98 2023-07-21 08:14:48,092 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 08:14:48,093 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-21 08:14:48,095 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-21 08:14:48,095 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40169,1689927277346 in 380 msec 2023-07-21 08:14:48,096 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:48,246 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,40889,1689927276956, state=OPENING 2023-07-21 08:14:48,248 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 08:14:48,248 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:14:48,248 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:48,404 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 08:14:48,405 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:14:48,406 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40889%2C1689927276956.meta, suffix=.meta, logDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956, archiveDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs, maxLogs=32 2023-07-21 08:14:48,428 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK] 2023-07-21 08:14:48,428 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK] 2023-07-21 08:14:48,428 DEBUG [RS-EventLoopGroup-7-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK] 2023-07-21 08:14:48,430 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956/jenkins-hbase5.apache.org%2C40889%2C1689927276956.meta.1689927288408.meta 2023-07-21 08:14:48,430 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK], DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK]] 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 08:14:48,431 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 08:14:48,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 08:14:48,433 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:14:48,434 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info 2023-07-21 08:14:48,434 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info 2023-07-21 08:14:48,435 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:14:48,442 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:48,442 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info/01d199fed890416fb33e37e0a1497d00 2023-07-21 08:14:48,442 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:48,442 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:14:48,443 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:14:48,443 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:14:48,444 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:14:48,451 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:48,451 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier/66b041821f69497db882f4b44a016bf2 2023-07-21 08:14:48,452 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:48,452 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:14:48,453 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table 2023-07-21 08:14:48,453 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table 2023-07-21 08:14:48,453 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:14:48,463 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 763477a22037401c85835099a2439057 2023-07-21 08:14:48,463 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table/763477a22037401c85835099a2439057 2023-07-21 08:14:48,463 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:48,464 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:48,465 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740 2023-07-21 08:14:48,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 08:14:48,469 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:14:48,470 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=102; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10469463200, jitterRate=-0.024955257773399353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:14:48,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:14:48,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=80, masterSystemTime=1689927288400 2023-07-21 08:14:48,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 08:14:48,473 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 08:14:48,474 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,40889,1689927276956, state=OPEN 2023-07-21 08:14:48,476 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 08:14:48,476 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:14:48,479 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-21 08:14:48,479 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,40889,1689927276956 in 228 msec 2023-07-21 08:14:48,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 772 msec 2023-07-21 08:14:48,709 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-21 08:14:48,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154, jenkins-hbase5.apache.org,40169,1689927277346] are moved back to default 2023-07-21 08:14:48,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 08:14:48,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:48,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:48,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:48,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=bar 2023-07-21 08:14:48,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:48,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:48,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:48,720 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:48,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-21 08:14:48,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 08:14:48,723 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:48,723 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:48,724 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:48,724 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo 
count: 6 2023-07-21 08:14:48,726 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:48,727 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40169] ipc.CallRunner(144): callId: 190 service: ClientService methodName: Get size: 142 connection: 172.31.10.131:54466 deadline: 1689927348727, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase5.apache.org port=40889 startCode=1689927276956. As of locationSeqNum=98. 2023-07-21 08:14:48,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 08:14:48,829 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:48,829 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 empty. 2023-07-21 08:14:48,830 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:48,830 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 08:14:48,845 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:48,846 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d04ae912ae7c6bc3f051d0d04fc23d76, NAME => 'Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:48,856 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:48,856 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing d04ae912ae7c6bc3f051d0d04fc23d76, disabling compactions & flushes 2023-07-21 08:14:48,857 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:48,857 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:48,857 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. after waiting 0 ms 2023-07-21 08:14:48,857 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:48,857 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:48,857 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:48,859 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:48,860 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927288860"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927288860"}]},"ts":"1689927288860"} 2023-07-21 08:14:48,862 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:14:48,863 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:48,863 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927288863"}]},"ts":"1689927288863"} 2023-07-21 08:14:48,865 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 08:14:48,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, ASSIGN}] 2023-07-21 08:14:48,876 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, ASSIGN 2023-07-21 08:14:48,877 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:49,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 08:14:49,028 INFO [PEWorker-3] 
assignment.RegionStateStore(219): pid=82 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:49,029 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289028"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927289028"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927289028"}]},"ts":"1689927289028"} 2023-07-21 08:14:49,030 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:49,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d04ae912ae7c6bc3f051d0d04fc23d76, NAME => 'Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:49,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:49,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,189 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,190 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:49,190 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:49,191 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d04ae912ae7c6bc3f051d0d04fc23d76 columnFamilyName f 2023-07-21 08:14:49,192 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(310): Store=d04ae912ae7c6bc3f051d0d04fc23d76/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:49,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:49,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened d04ae912ae7c6bc3f051d0d04fc23d76; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9513480480, jitterRate=-0.11398808658123016}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:49,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:49,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76., pid=83, masterSystemTime=1689927289182 2023-07-21 08:14:49,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:49,202 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:49,202 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927289202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927289202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927289202"}]},"ts":"1689927289202"} 2023-07-21 08:14:49,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-21 08:14:49,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956 in 174 msec 2023-07-21 08:14:49,211 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 08:14:49,211 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, ASSIGN in 332 msec 2023-07-21 08:14:49,211 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:49,212 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927289212"}]},"ts":"1689927289212"} 2023-07-21 08:14:49,213 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 08:14:49,215 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:49,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 498 msec 2023-07-21 08:14:49,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 08:14:49,326 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-21 08:14:49,326 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-21 08:14:49,326 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:49,327 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40169] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.10.131:54474 deadline: 1689927349326, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase5.apache.org port=40889 startCode=1689927276956. As of locationSeqNum=98. 2023-07-21 08:14:49,430 DEBUG [hconnection-0x5a2c0b37-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:14:49,432 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:48080, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:14:49,440 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-21 08:14:49,440 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:49,441 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 08:14:49,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 08:14:49,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:49,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:49,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:49,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:49,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 08:14:49,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region d04ae912ae7c6bc3f051d0d04fc23d76 to RSGroup bar 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 08:14:49,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:49,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE 2023-07-21 08:14:49,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 08:14:49,451 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE 2023-07-21 08:14:49,452 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:49,452 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289452"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927289452"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927289452"}]},"ts":"1689927289452"} 2023-07-21 08:14:49,454 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:49,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing d04ae912ae7c6bc3f051d0d04fc23d76, disabling compactions & flushes 2023-07-21 08:14:49,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. after waiting 0 ms 2023-07-21 08:14:49,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:49,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:49,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:49,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding d04ae912ae7c6bc3f051d0d04fc23d76 move to jenkins-hbase5.apache.org,38059,1689927281154 record at close sequenceid=2 2023-07-21 08:14:49,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,623 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSED 2023-07-21 08:14:49,623 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927289623"}]},"ts":"1689927289623"} 2023-07-21 08:14:49,627 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-21 08:14:49,627 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956 in 171 msec 2023-07-21 08:14:49,628 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:49,778 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 08:14:49,779 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:49,779 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289779"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927289779"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927289779"}]},"ts":"1689927289779"} 2023-07-21 08:14:49,781 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:49,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:49,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d04ae912ae7c6bc3f051d0d04fc23d76, NAME => 'Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:49,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:49,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,943 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,944 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:49,945 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:49,945 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d04ae912ae7c6bc3f051d0d04fc23d76 columnFamilyName f 2023-07-21 08:14:49,946 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(310): Store=d04ae912ae7c6bc3f051d0d04fc23d76/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:49,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,949 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:49,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened d04ae912ae7c6bc3f051d0d04fc23d76; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11782912800, jitterRate=0.09736926853656769}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:49,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:49,960 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76., pid=86, masterSystemTime=1689927289933 2023-07-21 08:14:49,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:49,963 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:49,964 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927289963"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927289963"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927289963"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927289963"}]},"ts":"1689927289963"} 2023-07-21 08:14:49,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-21 08:14:49,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,38059,1689927281154 in 185 msec 2023-07-21 08:14:49,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE in 520 msec 2023-07-21 08:14:50,357 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 08:14:50,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-21 08:14:50,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-21 08:14:50,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:50,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:50,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:50,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=bar 2023-07-21 08:14:50,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:50,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup bar 2023-07-21 08:14:50,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:50,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.10.131:57944 deadline: 1689928490460, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 08:14:50,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:14:50,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:50,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.10.131:57944 deadline: 1689928490462, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 08:14:50,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 08:14:50,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:50,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:50,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:50,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:50,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 08:14:50,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region d04ae912ae7c6bc3f051d0d04fc23d76 to RSGroup default 2023-07-21 08:14:50,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE 2023-07-21 08:14:50,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 08:14:50,473 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE 2023-07-21 08:14:50,474 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:50,475 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927290474"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927290474"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927290474"}]},"ts":"1689927290474"} 2023-07-21 08:14:50,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:50,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing d04ae912ae7c6bc3f051d0d04fc23d76, disabling compactions & flushes 2023-07-21 08:14:50,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:50,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:50,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. after waiting 0 ms 2023-07-21 08:14:50,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:50,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:50,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:50,641 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:50,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding d04ae912ae7c6bc3f051d0d04fc23d76 move to jenkins-hbase5.apache.org,40889,1689927276956 record at close sequenceid=5 2023-07-21 08:14:50,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,644 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSED 2023-07-21 08:14:50,644 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927290644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927290644"}]},"ts":"1689927290644"} 2023-07-21 08:14:50,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 08:14:50,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,38059,1689927281154 in 169 msec 2023-07-21 08:14:50,647 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:50,798 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:50,798 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927290798"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927290798"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927290798"}]},"ts":"1689927290798"} 2023-07-21 08:14:50,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:50,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d04ae912ae7c6bc3f051d0d04fc23d76, NAME => 'Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,959 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,961 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:50,961 DEBUG [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f 2023-07-21 08:14:50,961 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d04ae912ae7c6bc3f051d0d04fc23d76 columnFamilyName f 2023-07-21 08:14:50,962 INFO [StoreOpener-d04ae912ae7c6bc3f051d0d04fc23d76-1] regionserver.HStore(310): Store=d04ae912ae7c6bc3f051d0d04fc23d76/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:50,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,964 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:50,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened d04ae912ae7c6bc3f051d0d04fc23d76; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10226793760, jitterRate=-0.04755561053752899}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:50,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:50,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76., pid=89, masterSystemTime=1689927290952 2023-07-21 08:14:50,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:50,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:50,974 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:50,974 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927290974"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927290974"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927290974"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927290974"}]},"ts":"1689927290974"} 2023-07-21 08:14:50,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-21 08:14:50,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956 in 176 msec 2023-07-21 08:14:50,981 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, REOPEN/MOVE in 508 msec 2023-07-21 08:14:51,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-21 08:14:51,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-21 08:14:51,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:51,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup bar 2023-07-21 08:14:51,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:51,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.10.131:57944 deadline: 1689928491484, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-21 08:14:51,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:14:51,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:51,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 08:14:51,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:51,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:51,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 08:14:51,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154, jenkins-hbase5.apache.org,40169,1689927277346] are moved back to bar 2023-07-21 08:14:51,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 08:14:51,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:51,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup bar 2023-07-21 08:14:51,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:51,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:51,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:14:51,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:51,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,518 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 08:14:51,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable Group_testFailRemoveGroup 2023-07-21 08:14:51,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 08:14:51,523 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927291523"}]},"ts":"1689927291523"} 2023-07-21 08:14:51,524 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 08:14:51,526 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 08:14:51,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, UNASSIGN}] 2023-07-21 08:14:51,529 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, UNASSIGN 2023-07-21 08:14:51,531 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:51,531 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927291531"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927291531"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927291531"}]},"ts":"1689927291531"} 2023-07-21 08:14:51,533 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:51,603 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-21 08:14:51,604 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 08:14:51,623 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 08:14:51,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:51,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing d04ae912ae7c6bc3f051d0d04fc23d76, disabling compactions & flushes 2023-07-21 08:14:51,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:51,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:51,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. after waiting 0 ms 2023-07-21 08:14:51,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 2023-07-21 08:14:51,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 08:14:51,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76. 
2023-07-21 08:14:51,692 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for d04ae912ae7c6bc3f051d0d04fc23d76: 2023-07-21 08:14:51,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:51,695 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=d04ae912ae7c6bc3f051d0d04fc23d76, regionState=CLOSED 2023-07-21 08:14:51,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689927291695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927291695"}]},"ts":"1689927291695"} 2023-07-21 08:14:51,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-21 08:14:51,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure d04ae912ae7c6bc3f051d0d04fc23d76, server=jenkins-hbase5.apache.org,40889,1689927276956 in 164 msec 2023-07-21 08:14:51,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-21 08:14:51,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d04ae912ae7c6bc3f051d0d04fc23d76, UNASSIGN in 174 msec 2023-07-21 08:14:51,704 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927291704"}]},"ts":"1689927291704"} 2023-07-21 08:14:51,705 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 08:14:51,707 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 08:14:51,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-21 08:14:51,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 08:14:51,825 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-21 08:14:51,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete Group_testFailRemoveGroup 2023-07-21 08:14:51,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,830 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 08:14:51,831 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:51,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:51,836 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:51,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:51,838 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits] 2023-07-21 08:14:51,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 08:14:51,844 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/10.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76/recovered.edits/10.seqid 2023-07-21 08:14:51,845 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testFailRemoveGroup/d04ae912ae7c6bc3f051d0d04fc23d76 2023-07-21 08:14:51,845 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 08:14:51,848 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,852 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 08:14:51,858 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 08:14:51,859 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,859 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-21 08:14:51,860 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927291859"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:51,864 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 08:14:51,864 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d04ae912ae7c6bc3f051d0d04fc23d76, NAME => 'Group_testFailRemoveGroup,,1689927288717.d04ae912ae7c6bc3f051d0d04fc23d76.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 08:14:51,865 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-21 08:14:51,865 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927291865"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:51,870 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 08:14:51,873 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 08:14:51,874 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 47 msec 2023-07-21 08:14:51,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 08:14:51,941 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-21 08:14:51,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:51,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
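[Editor's note] The pid=90 DisableTableProcedure and pid=93 DeleteTableProcedure walked through above are the server-side halves of two ordinary Admin calls issued by the test client (the "Started disable of" and "delete Group_testFailRemoveGroup" entries). A short sketch of that client side, using only the public Admin API; the connection setup is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testFailRemoveGroup");
          // Drives the DisableTableProcedure (pid=90 above): the region is
          // unassigned and the table state in hbase:meta becomes DISABLED.
          admin.disableTable(table);
          // Drives the DeleteTableProcedure (pid=93 above): region files are
          // archived, meta rows removed, and the table descriptor dropped.
          admin.deleteTable(table);
        }
      }
    }

The disable must complete before the delete is accepted, which is why the client polls "Checking to see if procedure is done pid=90" before issuing the delete request.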
2023-07-21 08:14:51,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:51,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:51,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:51,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:51,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:51,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:51,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:51,966 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:51,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:51,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:51,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:51,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:51,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:51,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:51,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:51,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928491981, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:51,981 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:51,983 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:51,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:51,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:51,984 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:51,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:51,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:52,007 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=517 (was 498) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1636371636_17 at /127.0.0.1:42706 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_174850753_17 at /127.0.0.1:42654 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_174850753_17 at /127.0.0.1:47786 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:57640 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b-prefix:jenkins-hbase5.apache.org,40889,1689927276956.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:39738 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase5:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_174850753_17 at /127.0.0.1:57622 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:39726 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a2c0b37-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:57670 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:42690 [Receiving block BP-1462393125-172.31.10.131-1689927271631:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=810 (was 786) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=537 (was 531) - SystemLoadAverage LEAK? -, ProcessCount=166 (was 167), AvailableMemoryMB=2960 (was 3224) 2023-07-21 08:14:52,009 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 08:14:52,028 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=517, OpenFileDescriptor=810, MaxFileDescriptor=60000, SystemLoadAverage=537, ProcessCount=166, AvailableMemoryMB=2959 2023-07-21 08:14:52,028 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 08:14:52,028 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 08:14:52,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:52,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:52,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:52,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
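[Editor's note] Each test boundary in this log repeats the same bookkeeping: list the groups, move any stray tables and servers back to default, drop and re-create the master group, then wait until only default and master remain (the "Waiting for cleanup to finish" entries). A sketch of the read side of that check, assuming the listRSGroups/getRSGroupInfo methods on RSGroupAdminClient and the RSGroupInfo accessors these tests use; the connection setup is a placeholder:

    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // ListRSGroupInfos: the same call the teardown issues repeatedly
          // while waiting for only "default" and "master" to remain.
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          for (RSGroupInfo group : groups) {
            System.out.println(group.getName() + " servers=" + group.getServers()
                + " tables=" + group.getTables());
          }
          // GetRSGroupInfo for a single group, as logged for group=default.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println("default servers: " + defaultGroup.getServers());
        }
      }
    }

The ConstraintException logged just after this setup, when moving jenkins-hbase5.apache.org:46585 into the master group, is expected: that address appears to be the master's own RPC endpoint (the port=46585 handler threads above belong to MasterRpcServices), not a region server, so the base test only records it as "Got this on setup, FYI".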
2023-07-21 08:14:52,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:52,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:52,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:52,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:52,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:52,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:52,047 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:52,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:52,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:52,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:52,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:52,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:52,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:52,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:52,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:52,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928492061, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:52,062 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:52,066 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:52,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:52,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:52,067 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:52,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:52,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:52,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:52,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:52,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,075 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:52,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:52,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:52,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:52,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:52,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:37025] to rsgroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:52,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:52,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:14:52,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157] are moved back to default 2023-07-21 08:14:52,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:52,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:52,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:52,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:52,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:52,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:52,097 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:52,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-21 08:14:52,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 08:14:52,099 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,100 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,100 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:52,100 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:52,105 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:52,106 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,107 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d empty. 
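The create request logged above corresponds to an ordinary Admin.createTable call with a single column family 'f' kept at one version, matching the descriptor the master prints. A minimal sketch of such a call follows; the class name CreateTableSketch and the connection setup are illustrative, not taken from the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
          // Single column family 'f' with VERSIONS => '1', REGION_REPLICATION => '1',
          // as in the descriptor printed by the master above.
          TableDescriptorBuilder td = TableDescriptorBuilder.newBuilder(table)
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)
                  .build());
          // The master stores a CreateTableProcedure for this request (pid=94 in the log).
          admin.createTable(td.build());
        }
      }
    }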
2023-07-21 08:14:52,108 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,108 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 08:14:52,128 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:52,130 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => f692b49177816b5802877837da5bad7d, NAME => 'GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing f692b49177816b5802877837da5bad7d, disabling compactions & flushes 2023-07-21 08:14:52,145 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. after waiting 0 ms 2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:52,145 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 
2023-07-21 08:14:52,145 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for f692b49177816b5802877837da5bad7d: 2023-07-21 08:14:52,148 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:52,149 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927292149"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927292149"}]},"ts":"1689927292149"} 2023-07-21 08:14:52,151 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:14:52,152 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:52,152 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927292152"}]},"ts":"1689927292152"} 2023-07-21 08:14:52,153 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 08:14:52,163 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:52,163 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:52,163 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:52,163 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:52,163 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:52,163 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, ASSIGN}] 2023-07-21 08:14:52,166 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, ASSIGN 2023-07-21 08:14:52,167 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:52,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 08:14:52,317 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
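The pid=94/95/96 entries are master-side procedures (CreateTableProcedure, TransitRegionStateProcedure, OpenRegionProcedure) executed by the PEWorker threads. One way to observe them from a client is Admin.getProcedures(), which returns a JSON dump of the procedures the master currently knows about; the short sketch below is illustrative and not part of the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DumpProceduresSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // JSON description of the master's procedures, e.g. the CreateTableProcedure and
          // TransitRegionStateProcedure pids that appear in the surrounding log entries.
          System.out.println(admin.getProcedures());
        }
      }
    }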
2023-07-21 08:14:52,319 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:52,319 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927292319"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927292319"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927292319"}]},"ts":"1689927292319"} 2023-07-21 08:14:52,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:52,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 08:14:52,478 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:52,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f692b49177816b5802877837da5bad7d, NAME => 'GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:52,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:52,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,480 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,481 DEBUG [StoreOpener-f692b49177816b5802877837da5bad7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/f 2023-07-21 08:14:52,482 DEBUG [StoreOpener-f692b49177816b5802877837da5bad7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/f 2023-07-21 08:14:52,483 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f692b49177816b5802877837da5bad7d columnFamilyName f 2023-07-21 08:14:52,483 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] regionserver.HStore(310): Store=f692b49177816b5802877837da5bad7d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:52,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:52,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:52,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened f692b49177816b5802877837da5bad7d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10060897600, jitterRate=-0.06300589442253113}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:52,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for f692b49177816b5802877837da5bad7d: 2023-07-21 08:14:52,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d., pid=96, masterSystemTime=1689927292473 2023-07-21 08:14:52,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:52,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 
2023-07-21 08:14:52,494 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:52,494 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927292494"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927292494"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927292494"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927292494"}]},"ts":"1689927292494"} 2023-07-21 08:14:52,498 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-21 08:14:52,498 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,38059,1689927281154 in 175 msec 2023-07-21 08:14:52,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 08:14:52,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, ASSIGN in 335 msec 2023-07-21 08:14:52,500 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:52,500 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927292500"}]},"ts":"1689927292500"} 2023-07-21 08:14:52,508 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 08:14:52,512 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:52,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 417 msec 2023-07-21 08:14:52,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 08:14:52,703 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-21 08:14:52,703 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-21 08:14:52,703 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:52,708 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
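The "Waiting until all regions of table ... get assigned" entries come from HBaseTestingUtility.waitUntilAllRegionsAssigned. Outside the test utility, a rough client-side approximation can poll the region locations recorded in meta until every region reports a server; the loop below is only such an approximation (the 60 s budget is copied from the log, all other names are illustrative), not the utility's actual implementation.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class WaitForAssignmentSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
        long deadline = System.currentTimeMillis() + 60_000;  // same 60s budget as the log
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(table)) {
          boolean assigned = false;
          while (!assigned && System.currentTimeMillis() < deadline) {
            // Treat a region as assigned once meta reports a server name for it.
            assigned = locator.getAllRegionLocations().stream()
                .allMatch(loc -> loc != null && loc.getServerName() != null);
            if (!assigned) {
              Thread.sleep(200);
            }
          }
          System.out.println("all regions assigned: " + assigned);
        }
      }
    }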
2023-07-21 08:14:52,708 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:52,708 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 08:14:52,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:52,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:52,716 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:52,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-21 08:14:52,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 08:14:52,723 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:52,724 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:52,725 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:52,726 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:52,732 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:52,736 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:52,737 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b empty. 
2023-07-21 08:14:52,738 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:52,738 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 08:14:52,796 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:52,806 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 176e94f8d939a75eb3f05b413ab3478b, NAME => 'GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:52,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 08:14:53,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 176e94f8d939a75eb3f05b413ab3478b, disabling compactions & flushes 2023-07-21 08:14:53,294 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. after waiting 0 ms 2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:53,294 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 
2023-07-21 08:14:53,294 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 176e94f8d939a75eb3f05b413ab3478b: 2023-07-21 08:14:53,297 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:53,298 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927293298"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927293298"}]},"ts":"1689927293298"} 2023-07-21 08:14:53,300 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:14:53,301 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:53,302 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927293302"}]},"ts":"1689927293302"} 2023-07-21 08:14:53,303 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 08:14:53,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:53,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:53,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:53,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:53,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:53,308 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, ASSIGN}] 2023-07-21 08:14:53,311 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, ASSIGN 2023-07-21 08:14:53,311 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,38059,1689927281154; forceNewPlan=false, retain=false 2023-07-21 08:14:53,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 08:14:53,462 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 08:14:53,463 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:53,463 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-21 08:14:53,463 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927293463"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927293463"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927293463"}]},"ts":"1689927293463"} 2023-07-21 08:14:53,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:53,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:53,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 176e94f8d939a75eb3f05b413ab3478b, NAME => 'GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:53,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:53,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,627 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,630 DEBUG [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/f 2023-07-21 08:14:53,630 DEBUG [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/f 2023-07-21 08:14:53,630 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 176e94f8d939a75eb3f05b413ab3478b columnFamilyName f 2023-07-21 08:14:53,631 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] regionserver.HStore(310): Store=176e94f8d939a75eb3f05b413ab3478b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:53,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:53,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:53,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 176e94f8d939a75eb3f05b413ab3478b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11200852000, jitterRate=0.043160632252693176}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:53,640 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 176e94f8d939a75eb3f05b413ab3478b: 2023-07-21 08:14:53,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b., pid=99, masterSystemTime=1689927293618 2023-07-21 08:14:53,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:53,643 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 
2023-07-21 08:14:53,643 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:53,644 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927293643"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927293643"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927293643"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927293643"}]},"ts":"1689927293643"} 2023-07-21 08:14:53,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 08:14:53,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,38059,1689927281154 in 180 msec 2023-07-21 08:14:53,649 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 08:14:53,650 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, ASSIGN in 339 msec 2023-07-21 08:14:53,650 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:53,650 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927293650"}]},"ts":"1689927293650"} 2023-07-21 08:14:53,652 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 08:14:53,654 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:53,656 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 944 msec 2023-07-21 08:14:53,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 08:14:53,823 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-21 08:14:53,823 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-21 08:14:53,823 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:53,826 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
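The entries that follow record the multi-table move itself: GrouptestMultiTableMoveA and GrouptestMultiTableMoveB are moved to Group_testMultiTableMove_1484363491 and their regions are closed and reopened on the group's server via REOPEN/MOVE procedures. A minimal sketch of the client-side call that triggers this is shown below; the group and table names are copied from the log, while the class name MoveTablesSketch and the connection setup are illustrative only.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        String group = "Group_testMultiTableMove_1484363491";  // group name from the log
        Set<TableName> tables = new HashSet<>();
        tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
        tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues the MoveTables request that produces the REOPEN/MOVE procedures below.
          rsGroupAdmin.moveTables(tables, group);
          // Equivalent of the GetRSGroupInfoOfTable checks: both tables should now report the group.
          RSGroupInfo info =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
          System.out.println("group of GrouptestMultiTableMoveA: " + info.getName());
        }
      }
    }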
2023-07-21 08:14:53,827 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:53,827 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 08:14:53,827 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:53,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 08:14:53,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:53,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 08:14:53,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:53,839 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:53,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:53,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:53,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 176e94f8d939a75eb3f05b413ab3478b to RSGroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, REOPEN/MOVE 2023-07-21 08:14:53,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,852 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region f692b49177816b5802877837da5bad7d to RSGroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:53,852 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, REOPEN/MOVE 2023-07-21 08:14:53,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, REOPEN/MOVE 2023-07-21 08:14:53,853 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:53,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1484363491, current retry=0 2023-07-21 08:14:53,854 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, REOPEN/MOVE 2023-07-21 08:14:53,854 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927293853"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927293853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927293853"}]},"ts":"1689927293853"} 2023-07-21 08:14:53,854 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:53,855 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927293854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927293854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927293854"}]},"ts":"1689927293854"} 2023-07-21 08:14:53,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:53,856 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:54,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing f692b49177816b5802877837da5bad7d, disabling compactions & flushes 2023-07-21 08:14:54,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:54,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:54,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. after waiting 0 ms 2023-07-21 08:14:54,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:54,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:54,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:54,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for f692b49177816b5802877837da5bad7d: 2023-07-21 08:14:54,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding f692b49177816b5802877837da5bad7d move to jenkins-hbase5.apache.org,37025,1689927277157 record at close sequenceid=2 2023-07-21 08:14:54,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 176e94f8d939a75eb3f05b413ab3478b, disabling compactions & flushes 2023-07-21 08:14:54,180 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:54,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:54,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. after waiting 0 ms 2023-07-21 08:14:54,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 
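The "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1484363491" request above, together with the REOPEN/MOVE procedures it triggers (pid=100/101), corresponds to a single client call. A minimal client-side sketch, assuming the RSGroupAdminClient API from the hbase-rsgroup module (not the test's actual code; the target group is assumed to have been created earlier with addRSGroup):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<TableName> tables = new HashSet<>();
          tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
          tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
          // Issues the RSGroupAdminService.MoveTables RPC seen in the log.
          rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1484363491");
        }
      }
    }

After this call the master re-persists group membership to the /hbase/rsgroup znodes and schedules one REOPEN/MOVE TransitRegionStateProcedure per region of the moved tables, which is exactly the close/reopen work logged around it.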
2023-07-21 08:14:54,182 INFO [AsyncFSWAL-0-hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData-prefix:jenkins-hbase5.apache.org,46585,1689927275104] wal.AbstractFSWAL(1141): Slow sync cost: 164 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40079,DS-eca07878-4005-417c-888b-ba108a64f751,DISK], DatanodeInfoWithStorage[127.0.0.1:46363,DS-7218fb58-378e-4e2f-9bc7-05456d8fc68e,DISK], DatanodeInfoWithStorage[127.0.0.1:40235,DS-0fc516ec-6407-40a6-988b-0877a18a36a1,DISK]] 2023-07-21 08:14:54,182 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=CLOSED 2023-07-21 08:14:54,182 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294182"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927294182"}]},"ts":"1689927294182"} 2023-07-21 08:14:54,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:54,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:54,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 176e94f8d939a75eb3f05b413ab3478b: 2023-07-21 08:14:54,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 176e94f8d939a75eb3f05b413ab3478b move to jenkins-hbase5.apache.org,37025,1689927277157 record at close sequenceid=2 2023-07-21 08:14:54,187 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-21 08:14:54,187 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,38059,1689927281154 in 328 msec 2023-07-21 08:14:54,188 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:54,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,189 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=CLOSED 2023-07-21 08:14:54,189 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927294189"}]},"ts":"1689927294189"} 2023-07-21 08:14:54,193 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 
2023-07-21 08:14:54,193 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,38059,1689927281154 in 336 msec 2023-07-21 08:14:54,194 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,37025,1689927277157; forceNewPlan=false, retain=false 2023-07-21 08:14:54,339 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:54,339 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:54,339 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294339"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927294339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927294339"}]},"ts":"1689927294339"} 2023-07-21 08:14:54,339 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294339"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927294339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927294339"}]},"ts":"1689927294339"} 2023-07-21 08:14:54,341 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:54,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:54,497 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 
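The close-then-reopen sequence above is the master relocating each region onto a server of the target group. The same relocation can also be requested for a single region directly from a client; a sketch, assuming the Admin.move(byte[], ServerName) overload available in 2.x, with the encoded region name and destination server copied from the log purely as illustrative values:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Destination in "host,port,startcode" form, as printed in the log.
          ServerName dest = ServerName.valueOf("jenkins-hbase5.apache.org,37025,1689927277157");
          admin.move(Bytes.toBytes("176e94f8d939a75eb3f05b413ab3478b"), dest);
        }
      }
    }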
2023-07-21 08:14:54,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 176e94f8d939a75eb3f05b413ab3478b, NAME => 'GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:54,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:54,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,499 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,500 DEBUG [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/f 2023-07-21 08:14:54,500 DEBUG [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/f 2023-07-21 08:14:54,500 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 176e94f8d939a75eb3f05b413ab3478b columnFamilyName f 2023-07-21 08:14:54,501 INFO [StoreOpener-176e94f8d939a75eb3f05b413ab3478b-1] regionserver.HStore(310): Store=176e94f8d939a75eb3f05b413ab3478b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:54,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:54,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 176e94f8d939a75eb3f05b413ab3478b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10847195520, jitterRate=0.010223805904388428}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:54,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 176e94f8d939a75eb3f05b413ab3478b: 2023-07-21 08:14:54,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b., pid=104, masterSystemTime=1689927294492 2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:54,510 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:54,510 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 
2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f692b49177816b5802877837da5bad7d, NAME => 'GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:54,510 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,510 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294510"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927294510"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927294510"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927294510"}]},"ts":"1689927294510"} 2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,512 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,513 DEBUG [StoreOpener-f692b49177816b5802877837da5bad7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/f 2023-07-21 08:14:54,513 DEBUG [StoreOpener-f692b49177816b5802877837da5bad7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/f 2023-07-21 08:14:54,513 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f692b49177816b5802877837da5bad7d columnFamilyName f 2023-07-21 08:14:54,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-21 08:14:54,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,37025,1689927277157 in 171 msec 2023-07-21 08:14:54,514 INFO [StoreOpener-f692b49177816b5802877837da5bad7d-1] regionserver.HStore(310): Store=f692b49177816b5802877837da5bad7d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:54,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,515 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, REOPEN/MOVE in 664 msec 2023-07-21 08:14:54,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for f692b49177816b5802877837da5bad7d 2023-07-21 08:14:54,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened f692b49177816b5802877837da5bad7d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9457134240, jitterRate=-0.11923573911190033}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:54,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for f692b49177816b5802877837da5bad7d: 2023-07-21 08:14:54,521 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d., pid=105, masterSystemTime=1689927294492 2023-07-21 08:14:54,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:54,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 
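With both regions reopened on jenkins-hbase5.apache.org,37025, the test re-reads the group assignment (the GetRSGroupInfoOfTable / GetRSGroupInfo requests that follow). A client-side check might look like this; a sketch assuming the RSGroupAdminClient and RSGroupInfo APIs from the hbase-rsgroup module, not the test's literal assertions:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class VerifyGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Which group owns the table now? (GetRSGroupInfoOfTable)
          RSGroupInfo ofTable =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
          System.out.println("table belongs to group " + ofTable.getName());
          // Does the target group list both tables and the expected server? (GetRSGroupInfo)
          RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_1484363491");
          System.out.println("group tables:  " + group.getTables());
          System.out.println("group servers: " + group.getServers());
        }
      }
    }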
2023-07-21 08:14:54,523 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:54,523 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294523"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927294523"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927294523"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927294523"}]},"ts":"1689927294523"} 2023-07-21 08:14:54,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-21 08:14:54,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,37025,1689927277157 in 184 msec 2023-07-21 08:14:54,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, REOPEN/MOVE in 675 msec 2023-07-21 08:14:54,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-21 08:14:54,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1484363491. 2023-07-21 08:14:54,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:54,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:54,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:54,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 08:14:54,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:54,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 08:14:54,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:54,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:54,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:54,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1484363491 2023-07-21 08:14:54,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:54,871 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 08:14:54,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable GrouptestMultiTableMoveA 2023-07-21 08:14:54,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:54,876 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927294876"}]},"ts":"1689927294876"} 2023-07-21 08:14:54,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 08:14:54,878 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 08:14:54,880 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 08:14:54,885 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, UNASSIGN}] 2023-07-21 08:14:54,887 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, UNASSIGN 2023-07-21 08:14:54,888 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:54,888 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927294888"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927294888"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927294888"}]},"ts":"1689927294888"} 2023-07-21 08:14:54,889 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure f692b49177816b5802877837da5bad7d, 
server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:54,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 08:14:55,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close f692b49177816b5802877837da5bad7d 2023-07-21 08:14:55,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing f692b49177816b5802877837da5bad7d, disabling compactions & flushes 2023-07-21 08:14:55,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:55,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:55,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. after waiting 0 ms 2023-07-21 08:14:55,043 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 2023-07-21 08:14:55,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:55,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d. 
2023-07-21 08:14:55,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for f692b49177816b5802877837da5bad7d: 2023-07-21 08:14:55,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed f692b49177816b5802877837da5bad7d 2023-07-21 08:14:55,049 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=f692b49177816b5802877837da5bad7d, regionState=CLOSED 2023-07-21 08:14:55,049 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927295049"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927295049"}]},"ts":"1689927295049"} 2023-07-21 08:14:55,054 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-21 08:14:55,054 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure f692b49177816b5802877837da5bad7d, server=jenkins-hbase5.apache.org,37025,1689927277157 in 162 msec 2023-07-21 08:14:55,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-21 08:14:55,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=f692b49177816b5802877837da5bad7d, UNASSIGN in 173 msec 2023-07-21 08:14:55,057 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927295056"}]},"ts":"1689927295056"} 2023-07-21 08:14:55,058 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 08:14:55,060 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 08:14:55,062 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 190 msec 2023-07-21 08:14:55,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 08:14:55,179 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-21 08:14:55,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete GrouptestMultiTableMoveA 2023-07-21 08:14:55,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,182 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1484363491' 2023-07-21 08:14:55,183 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:55,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,187 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:55,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 08:14:55,189 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits] 2023-07-21 08:14:55,195 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d/recovered.edits/7.seqid 2023-07-21 08:14:55,195 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveA/f692b49177816b5802877837da5bad7d 2023-07-21 08:14:55,196 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 08:14:55,198 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,202 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-21 08:14:55,204 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 08:14:55,205 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,205 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
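The disable/delete sequence for GrouptestMultiTableMoveA above (DisableTableProcedure pid=106, then DeleteTableProcedure pid=109 archiving the region directory and cleaning hbase:meta) is the server-side half of two plain Admin calls; the same pair is repeated for GrouptestMultiTableMoveB below. A minimal sketch, assuming the standard HBase 2.x Admin API (illustrative class name, not the test's code):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(tableA); // DisableTableProcedure: UNASSIGN regions, state=DISABLED
          admin.deleteTable(tableA);  // DeleteTableProcedure: archive region dirs, clean hbase:meta
        }
      }
    }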
2023-07-21 08:14:55,205 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927295205"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:55,208 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 08:14:55,208 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f692b49177816b5802877837da5bad7d, NAME => 'GrouptestMultiTableMoveA,,1689927292094.f692b49177816b5802877837da5bad7d.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 08:14:55,208 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 08:14:55,209 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927295209"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:55,210 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 08:14:55,213 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 08:14:55,214 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 33 msec 2023-07-21 08:14:55,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 08:14:55,290 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-21 08:14:55,290 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 08:14:55,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable GrouptestMultiTableMoveB 2023-07-21 08:14:55,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 08:14:55,295 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927295295"}]},"ts":"1689927295295"} 2023-07-21 08:14:55,296 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 08:14:55,298 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 08:14:55,299 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, UNASSIGN}] 2023-07-21 08:14:55,300 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, UNASSIGN 2023-07-21 08:14:55,301 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:14:55,301 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927295301"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927295301"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927295301"}]},"ts":"1689927295301"} 2023-07-21 08:14:55,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,37025,1689927277157}] 2023-07-21 08:14:55,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 08:14:55,443 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 08:14:55,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:55,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 176e94f8d939a75eb3f05b413ab3478b, disabling compactions & flushes 2023-07-21 08:14:55,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:55,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:55,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. after waiting 0 ms 2023-07-21 08:14:55,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 2023-07-21 08:14:55,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:55,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b. 
2023-07-21 08:14:55,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 176e94f8d939a75eb3f05b413ab3478b: 2023-07-21 08:14:55,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:55,467 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=176e94f8d939a75eb3f05b413ab3478b, regionState=CLOSED 2023-07-21 08:14:55,467 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689927295467"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927295467"}]},"ts":"1689927295467"} 2023-07-21 08:14:55,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 08:14:55,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 176e94f8d939a75eb3f05b413ab3478b, server=jenkins-hbase5.apache.org,37025,1689927277157 in 166 msec 2023-07-21 08:14:55,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-21 08:14:55,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=176e94f8d939a75eb3f05b413ab3478b, UNASSIGN in 172 msec 2023-07-21 08:14:55,472 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927295472"}]},"ts":"1689927295472"} 2023-07-21 08:14:55,474 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 08:14:55,476 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 08:14:55,478 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 185 msec 2023-07-21 08:14:55,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 08:14:55,596 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-21 08:14:55,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete GrouptestMultiTableMoveB 2023-07-21 08:14:55,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,600 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1484363491' 2023-07-21 08:14:55,601 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:55,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,605 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:55,606 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits] 2023-07-21 08:14:55,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 08:14:55,613 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits/7.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b/recovered.edits/7.seqid 2023-07-21 08:14:55,614 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/GrouptestMultiTableMoveB/176e94f8d939a75eb3f05b413ab3478b 2023-07-21 08:14:55,614 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 08:14:55,616 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,618 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 08:14:55,620 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 08:14:55,621 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,621 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-21 08:14:55,621 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927295621"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:55,622 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 08:14:55,622 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 176e94f8d939a75eb3f05b413ab3478b, NAME => 'GrouptestMultiTableMoveB,,1689927292710.176e94f8d939a75eb3f05b413ab3478b.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 08:14:55,622 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 08:14:55,622 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927295622"}]},"ts":"9223372036854775807"} 2023-07-21 08:14:55,623 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 08:14:55,625 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 08:14:55,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 28 msec 2023-07-21 08:14:55,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 08:14:55,712 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-21 08:14:55,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
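The records up to this point show the master driving the post-test cleanup for GrouptestMultiTableMoveB: a DisableTableProcedure (pid=110) followed by a DeleteTableProcedure (pid=113) that archives the region directory and removes the table from hbase:meta. Below is a minimal client-side sketch of the calls that trigger these two procedures, using the standard org.apache.hadoop.hbase.client.Admin API; the configuration and connection setup are assumptions, and only the table name is taken from the log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();          // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
      admin.disableTable(table);  // blocks until the DisableTableProcedure (pid=110 above) finishes
      admin.deleteTable(table);   // archives the region dirs and clears hbase:meta (pid=113 above)
    }
  }
}
```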
2023-07-21 08:14:55,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:14:55,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1484363491 2023-07-21 08:14:55,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1484363491, current retry=0 2023-07-21 08:14:55,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157] are moved back to Group_testMultiTableMove_1484363491 2023-07-21 08:14:55,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1484363491 => default 2023-07-21 08:14:55,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_testMultiTableMove_1484363491 2023-07-21 08:14:55,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:14:55,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
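The records above also show the shared TestRSGroupsBase teardown: empty moveTables requests are ignored, the remaining server is moved back to the default group, and the temporary rsgroup Group_testMultiTableMove_1484363491 is removed. The following is a hedged sketch of the equivalent calls through RSGroupAdminClient (the client class visible in the stack traces further down); the Connection is assumed to already exist, and the host, port, and group name are copied from the log.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupTeardownSketch {
  // 'conn' is an already-open Connection to the cluster (assumed).
  static void tearDownGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Move the group's server back to 'default', as in the MoveServers request above.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase5.apache.org", 37025)),
        RSGroupInfo.DEFAULT_GROUP);
    // Drop the now-empty test group, as in the RemoveRSGroup request above.
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1484363491");
  }
}
```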
2023-07-21 08:14:55,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:55,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:55,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:55,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,739 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:55,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:55,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:55,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:55,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:55,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928495749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:55,751 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:55,752 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:55,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,753 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:55,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:55,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,773 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=513 (was 517), OpenFileDescriptor=807 (was 810), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=537 (was 537), ProcessCount=168 (was 166) - ProcessCount LEAK? 
-, AvailableMemoryMB=2897 (was 2959) 2023-07-21 08:14:55,773 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-21 08:14:55,789 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=513, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=537, ProcessCount=168, AvailableMemoryMB=2896 2023-07-21 08:14:55,789 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-21 08:14:55,789 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 08:14:55,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:14:55,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:55,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:55,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:55,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,808 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:55,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:55,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 08:14:55,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:55,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:55,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:55,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928495823, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:55,823 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:14:55,826 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:55,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,827 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:55,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:55,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:55,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup oldGroup 2023-07-21 08:14:55,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:55,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup oldGroup 2023-07-21 08:14:55,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:14:55,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to default 2023-07-21 08:14:55,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 08:14:55,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 08:14:55,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 08:14:55,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:55,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,865 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup anotherRSGroup 2023-07-21 08:14:55,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 08:14:55,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:55,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:55,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169] to rsgroup anotherRSGroup 2023-07-21 08:14:55,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 08:14:55,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:55,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:14:55,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,40169,1689927277346] are moved back to default 2023-07-21 08:14:55,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 08:14:55,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,884 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 08:14:55,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 08:14:55,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.10.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 08:14:55,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.10.131:57944 deadline: 1689928495891, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 08:14:55,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.10.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 08:14:55,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.10.131:57944 deadline: 1689928495893, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 08:14:55,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.10.131 rename rsgroup from default to newRSGroup2 2023-07-21 08:14:55,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.10.131:57944 deadline: 1689928495894, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 08:14:55,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.10.131 rename rsgroup from oldGroup to default 2023-07-21 08:14:55,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.10.131:57944 deadline: 1689928495895, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 08:14:55,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
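The three ConstraintException records above come from testRenameRSGroupConstraints: renaming a group that does not exist, renaming onto a name that is already taken, and renaming the built-in default group are all rejected by the master. Below is a hedged sketch of how a caller would observe these rejections, assuming RSGroupAdminClient exposes a renameRSGroup call mirroring the RSGroupAdminService.RenameRSGroup RPC logged above; the group names are the ones from the log and the Connection is assumed.

```java
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintSketch {
  // 'conn' is an already-open Connection (assumed); renameRSGroup is assumed to be the
  // client-side call behind the "rename rsgroup from ... to ..." records above.
  static void checkRenameConstraints(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    try {
      rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1");
    } catch (ConstraintException expected) {
      // "RSGroup nonExistingRSGroup does not exist"
    }
    try {
      rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup");
    } catch (ConstraintException expected) {
      // "Group already exists: anotherRSGroup"
    }
    try {
      rsGroupAdmin.renameRSGroup("default", "newRSGroup2");
    } catch (ConstraintException expected) {
      // "Can't rename default rsgroup"
    }
  }
}
```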
2023-07-21 08:14:55,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169] to rsgroup default 2023-07-21 08:14:55,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 08:14:55,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:55,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 08:14:55,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,40169,1689927277346] are moved back to anotherRSGroup 2023-07-21 08:14:55,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 08:14:55,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup anotherRSGroup 2023-07-21 08:14:55,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 08:14:55,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 08:14:55,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:14:55,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 08:14:55,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:55,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 08:14:55,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to oldGroup 2023-07-21 08:14:55,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 08:14:55,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup oldGroup 2023-07-21 08:14:55,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:14:55,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
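The repeated move-servers-then-RemoveRSGroup sequence above (anotherRSGroup, then oldGroup, then master) is the per-test cleanup from TestRSGroupsBase: every non-default group is emptied back into 'default' before it is dropped. A rough sketch of that pattern, assuming the same RSGroupAdminClient API:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RSGroupCleanupSketch {
  // Move every server of every non-default group back to 'default', then drop the group.
  static void removeAllNonDefaultGroups(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue;
      }
      if (!group.getServers().isEmpty()) {
        rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      }
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}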
2023-07-21 08:14:55,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:55,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:55,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:55,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:55,937 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:55,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:55,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:55,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:55,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:55,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:55,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:55,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928495946, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:55,947 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:55,949 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:55,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,950 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:55,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:55,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:55,968 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=517 (was 513) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=786 (was 807), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=537 (was 537), ProcessCount=168 (was 168), AvailableMemoryMB=2914 (was 2896) - AvailableMemoryMB LEAK? - 2023-07-21 08:14:55,968 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 08:14:55,985 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=517, OpenFileDescriptor=786, MaxFileDescriptor=60000, SystemLoadAverage=537, ProcessCount=168, AvailableMemoryMB=2914 2023-07-21 08:14:55,986 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-21 08:14:55,988 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 08:14:55,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:55,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:55,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:14:55,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
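The "Waiting for cleanup to finish" / Waiter lines above are a polling wait on ListRSGroupInfos until only the expected groups remain before the next test method starts. A condensed sketch of that style of wait, assuming HBaseTestingUtility.waitFor and the client from the previous sketch (the 60 s timeout and the two-group predicate are illustrative, not the test's exact condition):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class WaitForGroupCleanupSketch {
  // Poll the master until only 'default' and the test's 'master' group are left.
  static void waitForCleanup(HBaseTestingUtility util, RSGroupAdminClient rsGroupAdmin) throws Exception {
    util.waitFor(60000, () -> rsGroupAdmin.listRSGroups().size() <= 2);
  }
}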
2023-07-21 08:14:55,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:55,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:14:55,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:55,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:14:55,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:55,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:14:55,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:14:56,001 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:14:56,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:14:56,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:56,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:56,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:14:56,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:56,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:56,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:56,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:14:56,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:14:56,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928496011, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:14:56,012 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:14:56,013 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:56,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:56,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:56,014 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:14:56,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:56,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:56,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:56,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:56,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup oldgroup 2023-07-21 08:14:56,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:56,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:56,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:56,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:56,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:56,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:56,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:56,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup oldgroup 2023-07-21 08:14:56,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:56,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:56,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:56,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:56,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:14:56,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to default 2023-07-21 08:14:56,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 08:14:56,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:56,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:56,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:56,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 08:14:56,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:56,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:56,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 08:14:56,042 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:56,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-21 08:14:56,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 08:14:56,044 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:56,045 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:56,045 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:56,051 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:56,057 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:56,059 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,059 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 empty. 
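The CreateTableProcedure logged above (pid=114) is driven by an ordinary createTable request for 'testRename' with a single column family 'tr' and REGION_REPLICATION=1. A minimal Admin-side sketch, assuming the standard HBase 2.x client API:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class CreateTestRenameTableSketch {
  // Create 'testRename' with one 'tr' family, matching the descriptor logged by HMaster above.
  static void createTestRenameTable(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
        .build();
    admin.createTable(desc);
  }
}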
2023-07-21 08:14:56,060 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,060 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 08:14:56,089 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:56,090 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 334ef3e6b2ee23b07963f9cbcdefd1e0, NAME => 'testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 334ef3e6b2ee23b07963f9cbcdefd1e0, disabling compactions & flushes 2023-07-21 08:14:56,106 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. after waiting 0 ms 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,106 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,106 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:14:56,108 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:56,110 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296109"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927296109"}]},"ts":"1689927296109"} 2023-07-21 08:14:56,111 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 08:14:56,112 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:56,112 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927296112"}]},"ts":"1689927296112"} 2023-07-21 08:14:56,113 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 08:14:56,116 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:56,116 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:56,116 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:56,116 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:56,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, ASSIGN}] 2023-07-21 08:14:56,119 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, ASSIGN 2023-07-21 08:14:56,119 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:14:56,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 08:14:56,270 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 08:14:56,271 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:56,271 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927296271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927296271"}]},"ts":"1689927296271"} 2023-07-21 08:14:56,273 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:56,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 08:14:56,428 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 334ef3e6b2ee23b07963f9cbcdefd1e0, NAME => 'testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:56,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:56,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,430 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,432 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:14:56,432 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:14:56,432 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 334ef3e6b2ee23b07963f9cbcdefd1e0 columnFamilyName tr 2023-07-21 08:14:56,433 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(310): Store=334ef3e6b2ee23b07963f9cbcdefd1e0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:56,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:56,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 334ef3e6b2ee23b07963f9cbcdefd1e0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10396615520, jitterRate=-0.0317397266626358}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:56,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:14:56,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0., pid=116, masterSystemTime=1689927296424 2023-07-21 08:14:56,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 
2023-07-21 08:14:56,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:56,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296442"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927296442"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927296442"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927296442"}]},"ts":"1689927296442"} 2023-07-21 08:14:56,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-21 08:14:56,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346 in 171 msec 2023-07-21 08:14:56,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 08:14:56,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, ASSIGN in 328 msec 2023-07-21 08:14:56,453 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:56,453 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927296453"}]},"ts":"1689927296453"} 2023-07-21 08:14:56,454 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 08:14:56,456 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:56,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 417 msec 2023-07-21 08:14:56,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 08:14:56,647 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-21 08:14:56,647 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 08:14:56,647 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:56,650 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-21 08:14:56,650 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:56,651 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
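The block above is the master-side trace of a plain table creation: CreateTableProcedure pid=114 writes the FS layout and hbase:meta rows, spawns the ASSIGN for region 334ef3e6b2ee23b07963f9cbcdefd1e0, and the listener thread then waits until the region is assigned. The test source is not reproduced in this log, so the following is only a minimal, assumed client-side equivalent (standard HBase 2.x Admin API; table name 'testRename' and column family 'tr' are taken from the log, everything else is boilerplate):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One column family 'tr', REGION_REPLICATION 1 -- mirrors the descriptor logged for pid=114.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      // Blocks until the CreateTableProcedure reports completion on the master.
      admin.createTable(desc);
    }
  }
}

The repeated "Checking to see if procedure is done pid=114" DEBUG lines are the client polling the master for exactly this completion.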
2023-07-21 08:14:56,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [testRename] to rsgroup oldgroup 2023-07-21 08:14:56,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:56,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:56,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:56,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:14:56,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 08:14:56,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 334ef3e6b2ee23b07963f9cbcdefd1e0 to RSGroup oldgroup 2023-07-21 08:14:56,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:14:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:14:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:14:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:14:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:14:56,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE 2023-07-21 08:14:56,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 08:14:56,659 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE 2023-07-21 08:14:56,659 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:56,659 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296659"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927296659"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927296659"}]},"ts":"1689927296659"} 2023-07-21 08:14:56,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:56,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 334ef3e6b2ee23b07963f9cbcdefd1e0, disabling compactions & flushes 2023-07-21 08:14:56,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. after waiting 0 ms 2023-07-21 08:14:56,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:56,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:56,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:14:56,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 334ef3e6b2ee23b07963f9cbcdefd1e0 move to jenkins-hbase5.apache.org,38059,1689927281154 record at close sequenceid=2 2023-07-21 08:14:56,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:56,822 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=CLOSED 2023-07-21 08:14:56,822 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296822"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927296822"}]},"ts":"1689927296822"} 2023-07-21 08:14:56,825 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 08:14:56,825 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346 in 163 msec 2023-07-21 08:14:56,826 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,38059,1689927281154; 
forceNewPlan=false, retain=false 2023-07-21 08:14:56,976 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 08:14:56,976 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:56,976 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927296976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927296976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927296976"}]},"ts":"1689927296976"} 2023-07-21 08:14:56,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:14:57,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:57,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 334ef3e6b2ee23b07963f9cbcdefd1e0, NAME => 'testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:57,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:57,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,135 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,136 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:14:57,136 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:14:57,137 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 334ef3e6b2ee23b07963f9cbcdefd1e0 columnFamilyName tr 2023-07-21 08:14:57,137 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(310): Store=334ef3e6b2ee23b07963f9cbcdefd1e0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:57,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:14:57,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 334ef3e6b2ee23b07963f9cbcdefd1e0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9717032000, jitterRate=-0.09503087401390076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:57,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:14:57,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0., pid=119, masterSystemTime=1689927297130 2023-07-21 08:14:57,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:14:57,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 
2023-07-21 08:14:57,144 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:14:57,144 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927297144"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927297144"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927297144"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927297144"}]},"ts":"1689927297144"} 2023-07-21 08:14:57,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-21 08:14:57,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,38059,1689927281154 in 167 msec 2023-07-21 08:14:57,148 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE in 489 msec 2023-07-21 08:14:57,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-21 08:14:57,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-21 08:14:57,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:57,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:57,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:57,665 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:57,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 08:14:57,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:57,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 08:14:57,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:57,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 08:14:57,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:57,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:14:57,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:57,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup normal 2023-07-21 08:14:57,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:57,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:57,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:57,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:57,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:57,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:14:57,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:57,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:57,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169] to rsgroup normal 2023-07-21 08:14:57,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:57,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:57,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:57,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:57,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:57,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:14:57,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,40169,1689927277346] are moved back to default 2023-07-21 08:14:57,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 08:14:57,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:14:57,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:57,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:57,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=normal 2023-07-21 08:14:57,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:57,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:14:57,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 08:14:57,704 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:14:57,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-21 08:14:57,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 08:14:57,705 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:57,706 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:57,706 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:57,706 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 08:14:57,707 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:57,709 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:14:57,710 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:57,710 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 empty. 2023-07-21 08:14:57,711 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:57,711 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 08:14:57,749 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 08:14:57,762 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 39a5cd412e322750126f98dab14c6667, NAME => 'unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:14:57,785 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:57,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 39a5cd412e322750126f98dab14c6667, disabling compactions & flushes 2023-07-21 08:14:57,786 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:57,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:57,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. after waiting 0 ms 2023-07-21 08:14:57,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:57,786 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 
2023-07-21 08:14:57,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:57,790 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:14:57,791 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927297791"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927297791"}]},"ts":"1689927297791"} 2023-07-21 08:14:57,792 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:14:57,795 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:14:57,796 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927297795"}]},"ts":"1689927297795"} 2023-07-21 08:14:57,797 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 08:14:57,802 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, ASSIGN}] 2023-07-21 08:14:57,804 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, ASSIGN 2023-07-21 08:14:57,805 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:57,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 08:14:57,957 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:57,957 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927297957"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927297957"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927297957"}]},"ts":"1689927297957"} 2023-07-21 08:14:57,958 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:58,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-21 08:14:58,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39a5cd412e322750126f98dab14c6667, NAME => 'unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:58,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:58,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,116 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,117 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:58,117 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:58,118 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39a5cd412e322750126f98dab14c6667 columnFamilyName ut 2023-07-21 08:14:58,118 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(310): Store=39a5cd412e322750126f98dab14c6667/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:58,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:14:58,125 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 39a5cd412e322750126f98dab14c6667; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11691661600, jitterRate=0.08887083828449249}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:58,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:58,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667., pid=122, masterSystemTime=1689927298110 2023-07-21 08:14:58,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,127 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 
2023-07-21 08:14:58,127 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:58,127 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927298127"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927298127"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927298127"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927298127"}]},"ts":"1689927298127"} 2023-07-21 08:14:58,130 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-21 08:14:58,130 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956 in 171 msec 2023-07-21 08:14:58,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 08:14:58,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, ASSIGN in 328 msec 2023-07-21 08:14:58,132 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:14:58,132 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927298132"}]},"ts":"1689927298132"} 2023-07-21 08:14:58,133 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 08:14:58,135 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:14:58,136 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 434 msec 2023-07-21 08:14:58,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 08:14:58,309 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-21 08:14:58,309 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 08:14:58,309 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:58,312 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-21 08:14:58,312 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:58,312 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
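Between the two table creations, the log records three RSGroupAdminService requests: MoveTables (testRename into 'oldgroup', which triggers the REOPEN/MOVE pid=117 above), AddRSGroup ('normal'), and MoveServers (jenkins-hbase5.apache.org:40169 into 'normal'). A hedged sketch of client calls that would produce this sequence, using the hbase-rsgroup RSGroupAdminClient; how the actual test obtains its connection and admin instance is an assumption here:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveIntoGroups {
  // 'conn' is assumed to be an open Connection to the (mini) cluster.
  static void run(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // 08:14:56,652 -- move the table into the pre-existing group; its region is closed and reopened
    // on a server of the target group (the REOPEN/MOVE procedure seen above).
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    // 08:14:57,671 -- create the second group.
    rsGroupAdmin.addRSGroup("normal");
    // 08:14:57,685 -- move one region server out of 'default' into 'normal'.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase5.apache.org:40169")), "normal");
  }
}

Each call is handled synchronously on the master; the "Updating znode: /hbase/rsgroup/..." DEBUG lines show the group metadata being persisted while these requests are processed.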
2023-07-21 08:14:58,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [unmovedTable] to rsgroup normal 2023-07-21 08:14:58,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 08:14:58,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:58,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:58,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:58,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:58,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 08:14:58,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 39a5cd412e322750126f98dab14c6667 to RSGroup normal 2023-07-21 08:14:58,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE 2023-07-21 08:14:58,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 08:14:58,321 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE 2023-07-21 08:14:58,321 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:58,322 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927298321"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927298321"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927298321"}]},"ts":"1689927298321"} 2023-07-21 08:14:58,323 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:58,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 39a5cd412e322750126f98dab14c6667, disabling compactions & flushes 2023-07-21 08:14:58,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 
2023-07-21 08:14:58,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. after waiting 0 ms 2023-07-21 08:14:58,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:14:58,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,481 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:58,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 39a5cd412e322750126f98dab14c6667 move to jenkins-hbase5.apache.org,40169,1689927277346 record at close sequenceid=2 2023-07-21 08:14:58,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,483 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=CLOSED 2023-07-21 08:14:58,484 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927298483"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927298483"}]},"ts":"1689927298483"} 2023-07-21 08:14:58,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 08:14:58,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956 in 162 msec 2023-07-21 08:14:58,488 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:14:58,639 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:58,639 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927298639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927298639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927298639"}]},"ts":"1689927298639"} 2023-07-21 08:14:58,641 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:58,796 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39a5cd412e322750126f98dab14c6667, NAME => 'unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:58,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:58,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,798 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,799 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:58,799 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:58,799 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
39a5cd412e322750126f98dab14c6667 columnFamilyName ut 2023-07-21 08:14:58,800 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(310): Store=39a5cd412e322750126f98dab14c6667/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:58,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:58,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 39a5cd412e322750126f98dab14c6667; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11185947520, jitterRate=0.041772544384002686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:58,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:58,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667., pid=125, masterSystemTime=1689927298793 2023-07-21 08:14:58,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:58,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 
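The records above walk one region of unmovedTable (39a5cd412e322750126f98dab14c6667) through a full REOPEN/MOVE transition (pid=123): the region is closed on jenkins-hbase5.apache.org,40889, a recovered.edits seqid marker is written, hbase:meta is updated to CLOSED, the master picks a server from the destination group, and the region reopens on jenkins-hbase5.apache.org,40169 with the next sequence id. The transition is driven by the RSGroupAdminService.MoveTables call logged just below. A minimal sketch, assuming the hbase-rsgroup client API this test exercises (RSGroupAdminClient); the connection setup and class name are illustrative only:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        // Connect using whatever hbase-site.xml is on the classpath (illustrative).
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Reassigning the table to another group makes the master reopen each of
          // its regions on a server of that group -- the CLOSE/OPEN pair seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
        }
      }
    }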
2023-07-21 08:14:58,808 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:58,808 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927298808"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927298808"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927298808"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927298808"}]},"ts":"1689927298808"} 2023-07-21 08:14:58,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-21 08:14:58,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40169,1689927277346 in 169 msec 2023-07-21 08:14:58,812 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE in 491 msec 2023-07-21 08:14:59,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-21 08:14:59,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 08:14:59,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:14:59,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:59,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:59,329 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:14:59,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 08:14:59,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:59,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=normal 2023-07-21 08:14:59,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:59,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 08:14:59,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:59,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.10.131 rename rsgroup from oldgroup to newgroup 2023-07-21 08:14:59,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:59,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:59,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:59,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:14:59,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 08:14:59,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 08:14:59,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:59,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:59,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=newgroup 2023-07-21 08:14:59,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:14:59,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 08:14:59,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:59,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 08:14:59,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:14:59,356 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:14:59,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:14:59,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [unmovedTable] to rsgroup default 2023-07-21 08:14:59,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:14:59,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:14:59,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:14:59,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:14:59,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:14:59,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 08:14:59,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 39a5cd412e322750126f98dab14c6667 to RSGroup default 2023-07-21 08:14:59,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE 2023-07-21 08:14:59,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 08:14:59,368 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE 2023-07-21 08:14:59,369 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:14:59,369 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927299369"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927299369"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927299369"}]},"ts":"1689927299369"} 2023-07-21 08:14:59,370 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:14:59,464 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for 
table 'testRename' 2023-07-21 08:14:59,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 39a5cd412e322750126f98dab14c6667, disabling compactions & flushes 2023-07-21 08:14:59,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. after waiting 0 ms 2023-07-21 08:14:59,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:14:59,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:59,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 39a5cd412e322750126f98dab14c6667 move to jenkins-hbase5.apache.org,40889,1689927276956 record at close sequenceid=5 2023-07-21 08:14:59,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,531 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=CLOSED 2023-07-21 08:14:59,531 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927299531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927299531"}]},"ts":"1689927299531"} 2023-07-21 08:14:59,533 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-21 08:14:59,534 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40169,1689927277346 in 162 msec 2023-07-21 08:14:59,534 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:14:59,684 INFO [PEWorker-2] 
assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:59,685 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927299684"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927299684"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927299684"}]},"ts":"1689927299684"} 2023-07-21 08:14:59,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:14:59,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,841 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39a5cd412e322750126f98dab14c6667, NAME => 'unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:14:59,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:14:59,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,843 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,844 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:59,844 DEBUG [StoreOpener-39a5cd412e322750126f98dab14c6667-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/ut 2023-07-21 08:14:59,845 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39a5cd412e322750126f98dab14c6667 columnFamilyName ut 2023-07-21 08:14:59,845 INFO [StoreOpener-39a5cd412e322750126f98dab14c6667-1] regionserver.HStore(310): Store=39a5cd412e322750126f98dab14c6667/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:14:59,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:14:59,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 39a5cd412e322750126f98dab14c6667; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10250878560, jitterRate=-0.045312538743019104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:14:59,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:14:59,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667., pid=128, masterSystemTime=1689927299838 2023-07-21 08:14:59,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:14:59,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 
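Between the two moves, the test renames the group oldgroup to newgroup and re-reads membership for testRename and unmovedTable (the RenameRSGroup and GetRSGroupInfoOfTable requests above), then sends unmovedTable back to the default group, which triggers the second REOPEN/MOVE (pid=126) traced here. A sketch of the rename and the follow-up check, assuming RSGroupAdminClient exposes a renameRSGroup method matching the RenameRSGroup RPC seen in this log; group and table names are taken from the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class RenameGroupSketch {
      // Rename a group; member tables and servers keep their membership under the new name.
      static String renameAndCheck(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
        return info.getName(); // expected to report "newgroup" after the rename
      }
    }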
2023-07-21 08:14:59,852 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39a5cd412e322750126f98dab14c6667, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:14:59,852 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689927299852"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927299852"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927299852"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927299852"}]},"ts":"1689927299852"} 2023-07-21 08:14:59,855 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 08:14:59,855 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 39a5cd412e322750126f98dab14c6667, server=jenkins-hbase5.apache.org,40889,1689927276956 in 167 msec 2023-07-21 08:14:59,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39a5cd412e322750126f98dab14c6667, REOPEN/MOVE in 488 msec 2023-07-21 08:15:00,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-21 08:15:00,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-21 08:15:00,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:00,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40169] to rsgroup default 2023-07-21 08:15:00,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 08:15:00,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:00,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:00,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:15:00,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:15:00,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 08:15:00,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,40169,1689927277346] are moved back to normal 2023-07-21 08:15:00,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 08:15:00,376 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:00,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup normal 2023-07-21 08:15:00,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:00,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:00,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:15:00,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 08:15:00,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:00,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:00,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:00,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:00,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:00,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:00,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:00,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:00,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:15:00,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:15:00,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:00,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [testRename] to rsgroup default 2023-07-21 08:15:00,398 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:00,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:15:00,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:00,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 08:15:00,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(345): Moving region 334ef3e6b2ee23b07963f9cbcdefd1e0 to RSGroup default 2023-07-21 08:15:00,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE 2023-07-21 08:15:00,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 08:15:00,400 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE 2023-07-21 08:15:00,401 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:15:00,401 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927300401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927300401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927300401"}]},"ts":"1689927300401"} 2023-07-21 08:15:00,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,38059,1689927281154}] 2023-07-21 08:15:00,549 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 08:15:00,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,556 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 334ef3e6b2ee23b07963f9cbcdefd1e0, disabling compactions & flushes 2023-07-21 08:15:00,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,556 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,556 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 
after waiting 0 ms 2023-07-21 08:15:00,556 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 08:15:00,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:15:00,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(3513): Adding 334ef3e6b2ee23b07963f9cbcdefd1e0 move to jenkins-hbase5.apache.org,40169,1689927277346 record at close sequenceid=5 2023-07-21 08:15:00,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,574 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=CLOSED 2023-07-21 08:15:00,574 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927300574"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927300574"}]},"ts":"1689927300574"} 2023-07-21 08:15:00,577 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-21 08:15:00,577 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,38059,1689927281154 in 173 msec 2023-07-21 08:15:00,577 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:15:00,728 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
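The cleanup above undoes the per-test grouping: the table and the server that had been placed in group normal are moved back to default and the emptied group is removed (MoveServers followed by RemoveRSGroup; empty-set MoveTables/MoveServers calls are simply ignored), and the records that follow do the same for testRename and newgroup. A sketch of that pattern, under the same assumptions as the earlier examples; the host and port are copied from the log and purely illustrative:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class RestoreDefaultGroupSketch {
      // Move a region server back to the default group, then drop the now-empty group.
      static void restore(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase5.apache.org", 40169)),
            RSGroupInfo.DEFAULT_GROUP);
        // removeRSGroup is rejected while the group still owns servers or tables.
        rsGroupAdmin.removeRSGroup("normal");
      }
    }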
2023-07-21 08:15:00,728 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:00,728 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927300728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927300728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927300728"}]},"ts":"1689927300728"} 2023-07-21 08:15:00,730 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:15:00,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 334ef3e6b2ee23b07963f9cbcdefd1e0, NAME => 'testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:00,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:00,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,892 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,893 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:15:00,893 DEBUG [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/tr 2023-07-21 08:15:00,894 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 334ef3e6b2ee23b07963f9cbcdefd1e0 columnFamilyName tr 2023-07-21 08:15:00,895 INFO [StoreOpener-334ef3e6b2ee23b07963f9cbcdefd1e0-1] regionserver.HStore(310): Store=334ef3e6b2ee23b07963f9cbcdefd1e0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:00,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:00,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 334ef3e6b2ee23b07963f9cbcdefd1e0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11674326560, jitterRate=0.08725638687610626}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:00,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:15:00,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0., pid=131, masterSystemTime=1689927300886 2023-07-21 08:15:00,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:00,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 
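After testRename's region is reopened on a default-group server, the harness restores the bookkeeping group master and tries to move the master's own address, jenkins-hbase5.apache.org:46585, into it; that call fails with the ConstraintException captured in the stack traces below ("Server ... is either offline or it does not exist"), which TestRSGroupsBase only logs as a warning. A sketch of that expected failure, same assumptions as above:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveMasterAddressSketch {
      // Attempt to place the master's RPC address in a group; the server side rejects
      // addresses that do not belong to a live region server.
      static void tryMoveMaster(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase5.apache.org:46585")),
              "master");
        } catch (ConstraintException e) {
          // Expected, per the log: "Server jenkins-hbase5.apache.org:46585 is either
          // offline or it does not exist."
        }
      }
    }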
2023-07-21 08:15:00,906 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=334ef3e6b2ee23b07963f9cbcdefd1e0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:00,907 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689927300906"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927300906"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927300906"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927300906"}]},"ts":"1689927300906"} 2023-07-21 08:15:00,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-21 08:15:00,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 334ef3e6b2ee23b07963f9cbcdefd1e0, server=jenkins-hbase5.apache.org,40169,1689927277346 in 178 msec 2023-07-21 08:15:00,911 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=334ef3e6b2ee23b07963f9cbcdefd1e0, REOPEN/MOVE in 510 msec 2023-07-21 08:15:01,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-21 08:15:01,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-21 08:15:01,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:01,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:15:01,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 08:15:01,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:01,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 08:15:01,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to newgroup 2023-07-21 08:15:01,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 08:15:01,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:01,408 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup newgroup 2023-07-21 08:15:01,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:01,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:01,419 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:01,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:01,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:01,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:01,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:01,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928501435, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:01,436 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:01,438 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:01,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,439 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:01,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:01,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,459 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 517), OpenFileDescriptor=784 (was 786), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 537), ProcessCount=166 (was 168), AvailableMemoryMB=2773 (was 2914) 2023-07-21 08:15:01,459 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-21 08:15:01,476 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=166, AvailableMemoryMB=2773 2023-07-21 08:15:01,476 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-21 08:15:01,476 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 08:15:01,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:01,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:01,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:01,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:01,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:01,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:01,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:01,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:01,491 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:01,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:01,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:01,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:01,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:01,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928501502, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:01,502 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:15:01,504 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:01,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,505 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:01,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:01,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 08:15:01,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:15:01,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 08:15:01,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 08:15:01,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=bogus 2023-07-21 08:15:01,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup bogus 2023-07-21 08:15:01,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.10.131:57944 deadline: 1689928501516, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 08:15:01,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [bogus:123] to rsgroup bogus 2023-07-21 08:15:01,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.10.131:57944 deadline: 1689928501518, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 08:15:01,521 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 08:15:01,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(492): Client=jenkins//172.31.10.131 set balanceSwitch=true 2023-07-21 08:15:01,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.10.131 balance rsgroup, group=bogus 2023-07-21 08:15:01,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.10.131:57944 deadline: 1689928501526, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 08:15:01,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:01,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:15:01,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:01,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:01,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:01,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:01,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:01,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:01,542 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:01,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:01,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:01,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:01,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:01,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928501552, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:01,555 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:01,557 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:01,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,558 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:01,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:01,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,578 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27cfc6ad-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 784), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=166 (was 166), AvailableMemoryMB=2774 (was 2773) - AvailableMemoryMB LEAK? - 2023-07-21 08:15:01,578 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 08:15:01,595 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=166, AvailableMemoryMB=2774 2023-07-21 08:15:01,596 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-21 08:15:01,596 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 08:15:01,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:01,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:15:01,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:01,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:01,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:01,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:01,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:01,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:01,610 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:01,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:01,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:01,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:01,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:01,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:01,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928501621, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:01,622 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:01,624 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:01,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,625 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:01,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:01,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:01,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,630 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:01,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:01,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:01,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 08:15:01,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to default 2023-07-21 08:15:01,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:01,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:01,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:01,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,650 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:01,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:01,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:01,654 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:01,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-21 08:15:01,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 08:15:01,656 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:01,657 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:01,657 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:01,657 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:01,659 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:01,663 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:01,663 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:01,663 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:01,663 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:01,663 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 empty. 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf empty. 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 empty. 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 empty. 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 empty. 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:01,664 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:01,665 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:01,665 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:01,665 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 08:15:01,679 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:01,681 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 717cf3069431b06da6ceaed5211bdecf, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:15:01,681 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => c3138aebbdd89c5c7a334156a680df35, NAME => 'Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:15:01,681 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2f751d05e9bd2f39b04cb2b6e0c09540, NAME => 'Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:15:01,706 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:01,706 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 717cf3069431b06da6ceaed5211bdecf, disabling compactions & flushes 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing c3138aebbdd89c5c7a334156a680df35, disabling compactions & flushes 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:01,707 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 
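The entries above (08:15:01,627 through 08:15:01,645) trace an AddRSGroup call followed by a MoveServers call that places jenkins-hbase5.apache.org:38059 and :37025 into Group_testDisabledTableMove_1498439959. Below is a minimal client-side sketch of the same two calls, assuming the Private-audience RSGroupAdminClient named in the stack traces earlier in this log; the group name and host:port pairs are copied from the log, while the class and variable names in the sketch are illustrative only.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the target group, then move servers into it by host:port,
      // mirroring the AddRSGroup and MoveServers requests in the log above.
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1498439959");
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase5.apache.org", 38059),
          Address.fromParts("jenkins-hbase5.apache.org", 37025)));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1498439959");
    }
  }
}

Note that moving a server the master no longer considers online fails with the ConstraintException ("Server ... is either offline or it does not exist") shown in the setup/teardown trace at 08:15:01,622 earlier in this section.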
2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 2f751d05e9bd2f39b04cb2b6e0c09540, disabling compactions & flushes 2023-07-21 08:15:01,707 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:01,707 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. after waiting 0 ms 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:01,707 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. after waiting 0 ms 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. after waiting 0 ms 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 2f751d05e9bd2f39b04cb2b6e0c09540: 2023-07-21 08:15:01,707 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 
2023-07-21 08:15:01,707 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for c3138aebbdd89c5c7a334156a680df35: 2023-07-21 08:15:01,708 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:01,708 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 717cf3069431b06da6ceaed5211bdecf: 2023-07-21 08:15:01,708 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1b3cd39302d5dd9087d8035b91bcbc21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:15:01,708 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => d234f5ecc1220022cf9aa7fd46cfc2c9, NAME => 'Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 1b3cd39302d5dd9087d8035b91bcbc21, disabling compactions & flushes 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:01,722 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 
2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing d234f5ecc1220022cf9aa7fd46cfc2c9, disabling compactions & flushes 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:01,722 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:01,722 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. after waiting 0 ms 2023-07-21 08:15:01,723 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. after waiting 0 ms 2023-07-21 08:15:01,723 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:01,723 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:01,723 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:01,723 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 
2023-07-21 08:15:01,723 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 1b3cd39302d5dd9087d8035b91bcbc21: 2023-07-21 08:15:01,723 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for d234f5ecc1220022cf9aa7fd46cfc2c9: 2023-07-21 08:15:01,726 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:01,727 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927301727"}]},"ts":"1689927301727"} 2023-07-21 08:15:01,727 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927301727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927301727"}]},"ts":"1689927301727"} 2023-07-21 08:15:01,727 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927301727"}]},"ts":"1689927301727"} 2023-07-21 08:15:01,728 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927301727"}]},"ts":"1689927301727"} 2023-07-21 08:15:01,728 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927301727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927301727"}]},"ts":"1689927301727"} 2023-07-21 08:15:01,730 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
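The CreateTableProcedure above (pid=132) has now written the filesystem layout for five regions and added them to hbase:meta. A minimal sketch of issuing an equivalent create from a client follows, assuming the public Admin/TableDescriptorBuilder API; the family 'f' settings and the split boundaries ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') are taken from the descriptor and region names in the log, and everything else (class and variable names) is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTestTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Family 'f' with the settings shown in the logged descriptor
      // (VERSIONS => '1', BLOCKSIZE => '65536'); other attributes keep their defaults.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBlocksize(65536)
              .build())
          .build();
      // Four split keys yield the five regions named in the log:
      // '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splits);
    }
  }
}

The repeated "Checking to see if procedure is done pid=132" lines below are the client polling until this procedure completes; Admin.createTableAsync(desc, splits).get() would make the same wait explicit.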
2023-07-21 08:15:01,731 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:01,731 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927301731"}]},"ts":"1689927301731"} 2023-07-21 08:15:01,732 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 08:15:01,740 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:01,740 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:01,740 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:01,740 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:01,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, ASSIGN}] 2023-07-21 08:15:01,743 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, ASSIGN 2023-07-21 08:15:01,743 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, ASSIGN 2023-07-21 08:15:01,743 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, ASSIGN 2023-07-21 08:15:01,744 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, ASSIGN 2023-07-21 08:15:01,744 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:15:01,745 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:15:01,745 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40169,1689927277346; forceNewPlan=false, retain=false 2023-07-21 08:15:01,745 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, ASSIGN 2023-07-21 08:15:01,745 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:15:01,746 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40889,1689927276956; forceNewPlan=false, retain=false 2023-07-21 08:15:01,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 08:15:01,895 INFO [jenkins-hbase5:46585] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
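At this point the balancer has produced a plan for all five regions and the TransitRegionStateProcedures move them toward OPENING. In a test built on HBaseTestingUtility, as this run is, the usual way to block until assignment finishes is the utility's wait helper; a one-line sketch, assuming TEST_UTIL is the suite's HBaseTestingUtility instance:

// Blocks until every region of the table has been assigned to a region server.
TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testDisabledTableMove"));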
2023-07-21 08:15:01,898 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=1b3cd39302d5dd9087d8035b91bcbc21, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:01,898 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=c3138aebbdd89c5c7a334156a680df35, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:01,898 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927301898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927301898"}]},"ts":"1689927301898"} 2023-07-21 08:15:01,898 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=d234f5ecc1220022cf9aa7fd46cfc2c9, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:01,898 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=717cf3069431b06da6ceaed5211bdecf, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:01,899 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927301898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927301898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927301898"}]},"ts":"1689927301898"} 2023-07-21 08:15:01,899 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927301898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927301898"}]},"ts":"1689927301898"} 2023-07-21 08:15:01,898 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=2f751d05e9bd2f39b04cb2b6e0c09540, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:01,899 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927301898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927301898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927301898"}]},"ts":"1689927301898"} 2023-07-21 08:15:01,898 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927301898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927301898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927301898"}]},"ts":"1689927301898"} 2023-07-21 08:15:01,900 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=136, state=RUNNABLE; OpenRegionProcedure 1b3cd39302d5dd9087d8035b91bcbc21, 
server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:15:01,901 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=137, state=RUNNABLE; OpenRegionProcedure d234f5ecc1220022cf9aa7fd46cfc2c9, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:01,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; OpenRegionProcedure 717cf3069431b06da6ceaed5211bdecf, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:01,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure 2f751d05e9bd2f39b04cb2b6e0c09540, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:01,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=133, state=RUNNABLE; OpenRegionProcedure c3138aebbdd89c5c7a334156a680df35, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:15:01,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 08:15:02,056 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b3cd39302d5dd9087d8035b91bcbc21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 08:15:02,056 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 
2023-07-21 08:15:02,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d234f5ecc1220022cf9aa7fd46cfc2c9, NAME => 'Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,058 INFO [StoreOpener-1b3cd39302d5dd9087d8035b91bcbc21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,058 INFO [StoreOpener-d234f5ecc1220022cf9aa7fd46cfc2c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,059 DEBUG [StoreOpener-1b3cd39302d5dd9087d8035b91bcbc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/f 2023-07-21 08:15:02,060 DEBUG [StoreOpener-d234f5ecc1220022cf9aa7fd46cfc2c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/f 2023-07-21 08:15:02,060 DEBUG [StoreOpener-1b3cd39302d5dd9087d8035b91bcbc21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/f 2023-07-21 08:15:02,060 DEBUG [StoreOpener-d234f5ecc1220022cf9aa7fd46cfc2c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/f 2023-07-21 08:15:02,060 INFO [StoreOpener-1b3cd39302d5dd9087d8035b91bcbc21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b3cd39302d5dd9087d8035b91bcbc21 columnFamilyName f 2023-07-21 08:15:02,060 INFO [StoreOpener-d234f5ecc1220022cf9aa7fd46cfc2c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d234f5ecc1220022cf9aa7fd46cfc2c9 columnFamilyName f 2023-07-21 08:15:02,061 INFO [StoreOpener-1b3cd39302d5dd9087d8035b91bcbc21-1] regionserver.HStore(310): Store=1b3cd39302d5dd9087d8035b91bcbc21/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:02,061 INFO [StoreOpener-d234f5ecc1220022cf9aa7fd46cfc2c9-1] regionserver.HStore(310): Store=d234f5ecc1220022cf9aa7fd46cfc2c9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:02,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,062 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:02,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:02,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 1b3cd39302d5dd9087d8035b91bcbc21; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10860974880, jitterRate=0.011507108807563782}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:02,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened d234f5ecc1220022cf9aa7fd46cfc2c9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11277436480, jitterRate=0.05029311776161194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:02,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 1b3cd39302d5dd9087d8035b91bcbc21: 2023-07-21 08:15:02,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for d234f5ecc1220022cf9aa7fd46cfc2c9: 2023-07-21 08:15:02,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9., pid=139, masterSystemTime=1689927302053 2023-07-21 08:15:02,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21., pid=138, masterSystemTime=1689927302052 2023-07-21 08:15:02,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 
2023-07-21 08:15:02,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c3138aebbdd89c5c7a334156a680df35, NAME => 'Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 08:15:02,070 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=1b3cd39302d5dd9087d8035b91bcbc21, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:02,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,070 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302070"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927302070"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927302070"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927302070"}]},"ts":"1689927302070"} 2023-07-21 08:15:02,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:02,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 
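The AssignRegionHandler entries above show the regions being opened, split between jenkins-hbase5.apache.org,40169 and jenkins-hbase5.apache.org,40889. Once the opens finish, a client can confirm the placement with a RegionLocator; a small sketch, assuming conn is an open Connection (method and variable names are illustrative):

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

static void printRegionPlacement(Connection conn) throws IOException {
  try (RegionLocator locator =
      conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
    for (HRegionLocation loc : locator.getAllRegionLocations()) {
      // Encoded region name -> hosting region server
      System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
    }
  }
}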
2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f751d05e9bd2f39b04cb2b6e0c09540, NAME => 'Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 08:15:02,071 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=d234f5ecc1220022cf9aa7fd46cfc2c9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,071 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302071"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927302071"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927302071"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927302071"}]},"ts":"1689927302071"} 2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:02,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,072 INFO [StoreOpener-c3138aebbdd89c5c7a334156a680df35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,073 INFO [StoreOpener-2f751d05e9bd2f39b04cb2b6e0c09540-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=136 2023-07-21 08:15:02,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=136, state=SUCCESS; OpenRegionProcedure 1b3cd39302d5dd9087d8035b91bcbc21, server=jenkins-hbase5.apache.org,40169,1689927277346 in 172 msec 2023-07-21 08:15:02,074 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137 2023-07-21 08:15:02,074 DEBUG [StoreOpener-c3138aebbdd89c5c7a334156a680df35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/f 2023-07-21 08:15:02,074 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; OpenRegionProcedure d234f5ecc1220022cf9aa7fd46cfc2c9, server=jenkins-hbase5.apache.org,40889,1689927276956 in 172 msec 2023-07-21 08:15:02,075 DEBUG [StoreOpener-c3138aebbdd89c5c7a334156a680df35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/f 2023-07-21 08:15:02,075 DEBUG [StoreOpener-2f751d05e9bd2f39b04cb2b6e0c09540-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/f 2023-07-21 08:15:02,075 DEBUG [StoreOpener-2f751d05e9bd2f39b04cb2b6e0c09540-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/f 2023-07-21 08:15:02,075 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, ASSIGN in 334 msec 2023-07-21 08:15:02,075 INFO [StoreOpener-c3138aebbdd89c5c7a334156a680df35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c3138aebbdd89c5c7a334156a680df35 columnFamilyName f 2023-07-21 08:15:02,075 INFO [StoreOpener-2f751d05e9bd2f39b04cb2b6e0c09540-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f751d05e9bd2f39b04cb2b6e0c09540 columnFamilyName f 2023-07-21 08:15:02,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, ASSIGN in 334 msec 2023-07-21 08:15:02,076 INFO [StoreOpener-c3138aebbdd89c5c7a334156a680df35-1] regionserver.HStore(310): Store=c3138aebbdd89c5c7a334156a680df35/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:02,076 INFO [StoreOpener-2f751d05e9bd2f39b04cb2b6e0c09540-1] regionserver.HStore(310): 
Store=2f751d05e9bd2f39b04cb2b6e0c09540/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:02,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:02,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:02,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 2f751d05e9bd2f39b04cb2b6e0c09540; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9767733280, jitterRate=-0.09030894935131073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:02,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 2f751d05e9bd2f39b04cb2b6e0c09540: 2023-07-21 08:15:02,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened c3138aebbdd89c5c7a334156a680df35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10410843360, jitterRate=-0.030414655804634094}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:02,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for c3138aebbdd89c5c7a334156a680df35: 2023-07-21 08:15:02,083 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540., pid=141, masterSystemTime=1689927302053 2023-07-21 08:15:02,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35., pid=142, masterSystemTime=1689927302052 2023-07-21 08:15:02,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:02,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:02,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:02,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 717cf3069431b06da6ceaed5211bdecf, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 08:15:02,085 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=2f751d05e9bd2f39b04cb2b6e0c09540, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:02,085 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302085"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927302085"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927302085"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927302085"}]},"ts":"1689927302085"} 2023-07-21 08:15:02,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 
2023-07-21 08:15:02,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:02,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,086 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=c3138aebbdd89c5c7a334156a680df35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:02,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,086 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302086"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927302086"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927302086"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927302086"}]},"ts":"1689927302086"} 2023-07-21 08:15:02,088 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-21 08:15:02,088 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure 2f751d05e9bd2f39b04cb2b6e0c09540, server=jenkins-hbase5.apache.org,40889,1689927276956 in 185 msec 2023-07-21 08:15:02,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=133 2023-07-21 08:15:02,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=133, state=SUCCESS; OpenRegionProcedure c3138aebbdd89c5c7a334156a680df35, server=jenkins-hbase5.apache.org,40169,1689927277346 in 183 msec 2023-07-21 08:15:02,089 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, ASSIGN in 348 msec 2023-07-21 08:15:02,090 INFO [StoreOpener-717cf3069431b06da6ceaed5211bdecf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,090 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, ASSIGN in 349 msec 2023-07-21 08:15:02,091 DEBUG [StoreOpener-717cf3069431b06da6ceaed5211bdecf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/f 2023-07-21 08:15:02,091 DEBUG [StoreOpener-717cf3069431b06da6ceaed5211bdecf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/f 2023-07-21 08:15:02,092 INFO [StoreOpener-717cf3069431b06da6ceaed5211bdecf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 717cf3069431b06da6ceaed5211bdecf columnFamilyName f 2023-07-21 08:15:02,092 INFO [StoreOpener-717cf3069431b06da6ceaed5211bdecf-1] regionserver.HStore(310): Store=717cf3069431b06da6ceaed5211bdecf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:02,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:02,098 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 717cf3069431b06da6ceaed5211bdecf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10377956160, jitterRate=-0.03347751498222351}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:02,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 717cf3069431b06da6ceaed5211bdecf: 2023-07-21 08:15:02,098 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf., pid=140, masterSystemTime=1689927302053 2023-07-21 08:15:02,099 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 
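The five OpenRegionProcedure / ASSIGN subprocedures above belong to a table that was created pre-split: the region names carry the split boundaries, from 'aaaaa' up to 'zzzzz'. As a reading aid, a minimal client-side sketch that produces this kind of layout and then waits for assignment (as the test harness does a few entries below) could look roughly like the following; the class and method names, the two readable split keys, and the pre-existing HBaseTestingUtility instance are illustrative assumptions, not taken from this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    final class PreSplitTableSketch {
      // Creates a table with one family "f" and explicit split keys, then blocks until
      // every region is assigned, mirroring the CreateTableProcedure + ASSIGN sequence above.
      static void createAndWait(HBaseTestingUtility util, TableName tn) throws Exception {
        // The real test uses more boundaries (including binary ones); two keys are enough to illustrate.
        byte[][] splitKeys = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };
        Admin admin = util.getAdmin();  // shared Admin owned by the test utility; not closed here
        admin.createTable(
            TableDescriptorBuilder.newBuilder(tn)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .build(),
            splitKeys);
        util.waitUntilAllRegionsAssigned(tn);  // default timeout is 60s, as logged below
      }
    }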
2023-07-21 08:15:02,100 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:02,100 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=717cf3069431b06da6ceaed5211bdecf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,100 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302100"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927302100"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927302100"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927302100"}]},"ts":"1689927302100"} 2023-07-21 08:15:02,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-21 08:15:02,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; OpenRegionProcedure 717cf3069431b06da6ceaed5211bdecf, server=jenkins-hbase5.apache.org,40889,1689927276956 in 200 msec 2023-07-21 08:15:02,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=132 2023-07-21 08:15:02,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, ASSIGN in 362 msec 2023-07-21 08:15:02,104 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:02,104 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927302104"}]},"ts":"1689927302104"} 2023-07-21 08:15:02,105 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 08:15:02,107 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:02,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 455 msec 2023-07-21 08:15:02,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 08:15:02,260 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-21 08:15:02,260 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-21 08:15:02,260 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:02,264 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-21 08:15:02,264 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:02,264 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 08:15:02,265 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:02,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 08:15:02,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:15:02,271 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 08:15:02,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable Group_testDisabledTableMove 2023-07-21 08:15:02,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 08:15:02,275 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927302275"}]},"ts":"1689927302275"} 2023-07-21 08:15:02,276 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 08:15:02,278 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 08:15:02,279 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, UNASSIGN}] 2023-07-21 08:15:02,280 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, UNASSIGN 2023-07-21 08:15:02,280 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, UNASSIGN 2023-07-21 08:15:02,280 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, UNASSIGN 2023-07-21 08:15:02,281 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, UNASSIGN 2023-07-21 08:15:02,281 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, UNASSIGN 2023-07-21 08:15:02,281 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=1b3cd39302d5dd9087d8035b91bcbc21, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:02,281 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2f751d05e9bd2f39b04cb2b6e0c09540, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,281 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302281"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927302281"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927302281"}]},"ts":"1689927302281"} 2023-07-21 08:15:02,281 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302281"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927302281"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927302281"}]},"ts":"1689927302281"} 2023-07-21 08:15:02,282 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=d234f5ecc1220022cf9aa7fd46cfc2c9, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,282 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=717cf3069431b06da6ceaed5211bdecf, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,282 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302282"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927302282"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927302282"}]},"ts":"1689927302282"} 2023-07-21 08:15:02,282 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302282"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927302282"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927302282"}]},"ts":"1689927302282"} 2023-07-21 08:15:02,282 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=c3138aebbdd89c5c7a334156a680df35, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:02,282 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302282"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927302282"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927302282"}]},"ts":"1689927302282"} 2023-07-21 08:15:02,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=147, state=RUNNABLE; CloseRegionProcedure 1b3cd39302d5dd9087d8035b91bcbc21, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:15:02,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 2f751d05e9bd2f39b04cb2b6e0c09540, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:02,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=148, state=RUNNABLE; CloseRegionProcedure d234f5ecc1220022cf9aa7fd46cfc2c9, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:02,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=146, state=RUNNABLE; CloseRegionProcedure 717cf3069431b06da6ceaed5211bdecf, server=jenkins-hbase5.apache.org,40889,1689927276956}] 2023-07-21 08:15:02,285 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=144, state=RUNNABLE; CloseRegionProcedure c3138aebbdd89c5c7a334156a680df35, server=jenkins-hbase5.apache.org,40169,1689927277346}] 2023-07-21 08:15:02,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 08:15:02,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing d234f5ecc1220022cf9aa7fd46cfc2c9, disabling compactions & flushes 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 1b3cd39302d5dd9087d8035b91bcbc21, disabling 
compactions & flushes 2023-07-21 08:15:02,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:02,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. after waiting 0 ms 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. after waiting 0 ms 2023-07-21 08:15:02,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 2023-07-21 08:15:02,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:02,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:02,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21. 2023-07-21 08:15:02,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 1b3cd39302d5dd9087d8035b91bcbc21: 2023-07-21 08:15:02,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9. 
2023-07-21 08:15:02,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for d234f5ecc1220022cf9aa7fd46cfc2c9: 2023-07-21 08:15:02,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing c3138aebbdd89c5c7a334156a680df35, disabling compactions & flushes 2023-07-21 08:15:02,443 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:02,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:02,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. after waiting 0 ms 2023-07-21 08:15:02,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:02,444 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=1b3cd39302d5dd9087d8035b91bcbc21, regionState=CLOSED 2023-07-21 08:15:02,444 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302443"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927302443"}]},"ts":"1689927302443"} 2023-07-21 08:15:02,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 717cf3069431b06da6ceaed5211bdecf, disabling compactions & flushes 2023-07-21 08:15:02,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:02,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 2023-07-21 08:15:02,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. after waiting 0 ms 2023-07-21 08:15:02,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 
2023-07-21 08:15:02,445 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=d234f5ecc1220022cf9aa7fd46cfc2c9, regionState=CLOSED 2023-07-21 08:15:02,445 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927302445"}]},"ts":"1689927302445"} 2023-07-21 08:15:02,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=148 2023-07-21 08:15:02,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=147 2023-07-21 08:15:02,448 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=148, state=SUCCESS; CloseRegionProcedure d234f5ecc1220022cf9aa7fd46cfc2c9, server=jenkins-hbase5.apache.org,40889,1689927276956 in 162 msec 2023-07-21 08:15:02,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=147, state=SUCCESS; CloseRegionProcedure 1b3cd39302d5dd9087d8035b91bcbc21, server=jenkins-hbase5.apache.org,40169,1689927277346 in 164 msec 2023-07-21 08:15:02,449 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:02,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35. 2023-07-21 08:15:02,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for c3138aebbdd89c5c7a334156a680df35: 2023-07-21 08:15:02,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d234f5ecc1220022cf9aa7fd46cfc2c9, UNASSIGN in 169 msec 2023-07-21 08:15:02,450 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b3cd39302d5dd9087d8035b91bcbc21, UNASSIGN in 169 msec 2023-07-21 08:15:02,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:02,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf. 
2023-07-21 08:15:02,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 717cf3069431b06da6ceaed5211bdecf: 2023-07-21 08:15:02,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,453 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=c3138aebbdd89c5c7a334156a680df35, regionState=CLOSED 2023-07-21 08:15:02,454 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689927302453"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927302453"}]},"ts":"1689927302453"} 2023-07-21 08:15:02,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 2f751d05e9bd2f39b04cb2b6e0c09540, disabling compactions & flushes 2023-07-21 08:15:02,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:02,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:02,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. after waiting 0 ms 2023-07-21 08:15:02,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 
2023-07-21 08:15:02,457 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=717cf3069431b06da6ceaed5211bdecf, regionState=CLOSED 2023-07-21 08:15:02,457 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302457"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927302457"}]},"ts":"1689927302457"} 2023-07-21 08:15:02,458 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=144 2023-07-21 08:15:02,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=144, state=SUCCESS; CloseRegionProcedure c3138aebbdd89c5c7a334156a680df35, server=jenkins-hbase5.apache.org,40169,1689927277346 in 171 msec 2023-07-21 08:15:02,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:02,460 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540. 2023-07-21 08:15:02,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 2f751d05e9bd2f39b04cb2b6e0c09540: 2023-07-21 08:15:02,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c3138aebbdd89c5c7a334156a680df35, UNASSIGN in 180 msec 2023-07-21 08:15:02,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=146 2023-07-21 08:15:02,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=146, state=SUCCESS; CloseRegionProcedure 717cf3069431b06da6ceaed5211bdecf, server=jenkins-hbase5.apache.org,40889,1689927276956 in 174 msec 2023-07-21 08:15:02,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed 2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,463 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=2f751d05e9bd2f39b04cb2b6e0c09540, regionState=CLOSED 2023-07-21 08:15:02,464 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689927302463"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927302463"}]},"ts":"1689927302463"} 2023-07-21 08:15:02,464 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=717cf3069431b06da6ceaed5211bdecf, UNASSIGN in 184 msec 2023-07-21 08:15:02,466 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-21 08:15:02,466 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 2f751d05e9bd2f39b04cb2b6e0c09540, server=jenkins-hbase5.apache.org,40889,1689927276956 in 182 msec 
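The UNASSIGN and CloseRegionProcedure pairs above (pids 144 through 153) are all children of the single DisableTableProcedure stored as pid=143; from the client side the whole fan-out is one blocking Admin call. A rough sketch follows, assuming an Admin handle obtained elsewhere; the helper name is illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DisableTableSketch {
      // disableTable blocks until the DisableTableProcedure (and its per-region
      // UNASSIGN / CloseRegionProcedure children, as in the log) reaches SUCCESS.
      static void disableAndCheck(Admin admin, TableName tn) throws IOException {
        admin.disableTable(tn);
        if (!admin.isTableDisabled(tn)) {  // table state should now be DISABLED in hbase:meta
          throw new IllegalStateException(tn + " should be disabled");
        }
      }
    }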
2023-07-21 08:15:02,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=143 2023-07-21 08:15:02,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f751d05e9bd2f39b04cb2b6e0c09540, UNASSIGN in 187 msec 2023-07-21 08:15:02,469 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927302468"}]},"ts":"1689927302468"} 2023-07-21 08:15:02,469 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 08:15:02,472 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 08:15:02,474 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 201 msec 2023-07-21 08:15:02,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 08:15:02,577 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-21 08:15:02,578 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:02,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:02,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 08:15:02,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1498439959, current retry=0 2023-07-21 08:15:02,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1498439959. 
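The MoveTables request logged here only rewrites the rsgroup membership znodes, because every region of the table is already offline ("Skipping move regions because the table ... is disabled", "Moving 0 region(s)"). A minimal sketch of the corresponding client call, assuming the branch-2.4 hbase-rsgroup client API and an already-created target group; the class, method, and variable names are illustrative.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class MoveDisabledTableSketch {
      // Moves a (disabled) table into targetGroup and verifies the mapping.
      // For a disabled table no regions are reassigned; only the rsgroup metadata changes.
      static void moveTable(Connection conn, TableName tn, String targetGroup) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.singleton(tn), targetGroup);
        RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(tn);
        if (!targetGroup.equals(group.getName())) {
          throw new IllegalStateException("table not in expected group: " + group.getName());
        }
      }
    }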
2023-07-21 08:15:02,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:02,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 08:15:02,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 08:15:02,600 INFO [Listener at localhost/43961] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 08:15:02,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable Group_testDisabledTableMove 2023-07-21 08:15:02,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:02,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.10.131:57944 deadline: 1689927362600, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 08:15:02,601 DEBUG [Listener at localhost/43961] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
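The second disable attempt above fails fast with TableNotEnabledException, and the test utility falls back to deleting the already-disabled table. The usual client-side pattern for that is sketched below; dropTable is an illustrative helper name, not an HBase API.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTableSketch {
      // Disable if still enabled, then delete; tolerates the table already being disabled,
      // which is exactly the TableNotEnabledException path seen in the log.
      static void dropTable(Admin admin, TableName tn) throws IOException {
        try {
          admin.disableTable(tn);
        } catch (TableNotEnabledException e) {
          // already disabled: nothing to do before the delete
        }
        admin.deleteTable(tn);  // runs DeleteTableProcedure (pid=155 below)
      }
    }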
2023-07-21 08:15:02,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete Group_testDisabledTableMove 2023-07-21 08:15:02,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,605 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1498439959' 2023-07-21 08:15:02,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:02,608 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:02,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 08:15:02,617 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,617 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,617 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,617 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,617 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,620 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/f, FileablePath, 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/recovered.edits] 2023-07-21 08:15:02,620 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/recovered.edits] 2023-07-21 08:15:02,620 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/recovered.edits] 2023-07-21 08:15:02,620 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/recovered.edits] 2023-07-21 08:15:02,620 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/f, FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/recovered.edits] 2023-07-21 08:15:02,628 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35/recovered.edits/4.seqid 2023-07-21 08:15:02,629 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540/recovered.edits/4.seqid 2023-07-21 08:15:02,629 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/c3138aebbdd89c5c7a334156a680df35 2023-07-21 08:15:02,630 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/recovered.edits/4.seqid to 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9/recovered.edits/4.seqid 2023-07-21 08:15:02,631 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf/recovered.edits/4.seqid 2023-07-21 08:15:02,631 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/2f751d05e9bd2f39b04cb2b6e0c09540 2023-07-21 08:15:02,631 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/recovered.edits/4.seqid to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/archive/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21/recovered.edits/4.seqid 2023-07-21 08:15:02,631 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/d234f5ecc1220022cf9aa7fd46cfc2c9 2023-07-21 08:15:02,631 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/717cf3069431b06da6ceaed5211bdecf 2023-07-21 08:15:02,632 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/.tmp/data/default/Group_testDisabledTableMove/1b3cd39302d5dd9087d8035b91bcbc21 2023-07-21 08:15:02,632 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 08:15:02,634 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,636 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 08:15:02,641 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 08:15:02,642 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,642 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-21 08:15:02,643 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927302642"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,643 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927302642"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,643 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927302642"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,643 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927302642"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,643 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927302642"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,644 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 08:15:02,645 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c3138aebbdd89c5c7a334156a680df35, NAME => 'Group_testDisabledTableMove,,1689927301651.c3138aebbdd89c5c7a334156a680df35.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2f751d05e9bd2f39b04cb2b6e0c09540, NAME => 'Group_testDisabledTableMove,aaaaa,1689927301651.2f751d05e9bd2f39b04cb2b6e0c09540.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 717cf3069431b06da6ceaed5211bdecf, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689927301651.717cf3069431b06da6ceaed5211bdecf.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1b3cd39302d5dd9087d8035b91bcbc21, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689927301651.1b3cd39302d5dd9087d8035b91bcbc21.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d234f5ecc1220022cf9aa7fd46cfc2c9, NAME => 'Group_testDisabledTableMove,zzzzz,1689927301651.d234f5ecc1220022cf9aa7fd46cfc2c9.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 08:15:02,645 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
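
The Delete mutations above remove one hbase:meta row per region, keyed by the region name in the form "table,startKey,regionId.encodedName.". A hedged sketch of how such a row key is composed, using the public RegionInfo builder; the values are taken from the log, the class name is illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionInfoBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class MetaRowKeySketch {
      public static void main(String[] args) {
        RegionInfo region = RegionInfoBuilder
            .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
            .setStartKey(Bytes.toBytes("aaaaa"))
            .setRegionId(1689927301651L) // region creation timestamp from the log
            .build();
        // Prints "Group_testDisabledTableMove,aaaaa,1689927301651.<encodedName>." --
        // the same shape as the meta row keys deleted above; the 32-character
        // encoded name is a hash derived from the rest of the region name.
        System.out.println(region.getRegionNameAsString());
      }
    }
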
2023-07-21 08:15:02,645 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927302645"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:02,646 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 08:15:02,648 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 08:15:02,649 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 46 msec 2023-07-21 08:15:02,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 08:15:02,715 INFO [Listener at localhost/43961] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-21 08:15:02,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:02,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 08:15:02,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:02,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:02,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:02,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:02,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:15:02,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:02,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:02,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
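
The sequence above (list rsgroups, move tables and servers back to "default", remove the per-test group, re-create "master") is the per-method cleanup from TestRSGroupsBase. A rough sketch of that cleanup, written against the RSGroupAdminClient class named in the stack traces elsewhere in this log; it is an approximation of the idea, not the test's actual code, and the connection handling is assumed.

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupCleanupSketch {
      // Move everything out of a test group back to "default", then drop it.
      static void cleanUpGroup(Connection conn, String testGroup) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo group = rsGroupAdmin.getRSGroupInfo(testGroup);
        if (group == null) {
          return;
        }
        // Empty sets are tolerated; the server side just logs
        // "moveTables() passed an empty set. Ignoring." as seen above.
        rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.removeRSGroup(testGroup);
      }
    }
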
2023-07-21 08:15:02,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:02,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:37025] to rsgroup default 2023-07-21 08:15:02,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:02,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1498439959, current retry=0 2023-07-21 08:15:02,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase5.apache.org,37025,1689927277157, jenkins-hbase5.apache.org,38059,1689927281154] are moved back to Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1498439959 => default 2023-07-21 08:15:02,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:02,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_testDisabledTableMove_1498439959 2023-07-21 08:15:02,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:02,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:02,738 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:02,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:02,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:02,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 
08:15:02,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:02,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:02,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:02,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928502750, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:02,750 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:15:02,752 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:02,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,753 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:02,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:02,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:02,774 INFO [Listener at localhost/43961] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=516 (was 512) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-186777347_17 at /127.0.0.1:57982 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a2c0b37-shared-pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2015994353_17 at /127.0.0.1:47668 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=807 (was 784) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=517 (was 509) - SystemLoadAverage LEAK? -, ProcessCount=166 (was 166), AvailableMemoryMB=2667 (was 2774) 2023-07-21 08:15:02,777 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-21 08:15:02,795 INFO [Listener at localhost/43961] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=516, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=517, ProcessCount=166, AvailableMemoryMB=2667 2023-07-21 08:15:02,795 WARN [Listener at localhost/43961] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-21 08:15:02,795 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 08:15:02,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:02,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
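
Both cleanup passes above end the same way: moving the master's address (jenkins-hbase5.apache.org:46585) into the "master" rsgroup raises ConstraintException, because the master does not run as a region server and so is not among the online servers the group manager tracks; the test only logs it as a warning. A hedged sketch of that tolerant pattern, with illustrative names.

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveMasterSketch {
      // Attempt the move but treat "offline or does not exist" as non-fatal,
      // mirroring the "Got this on setup, FYI" warning in the log.
      static void tryMoveMasterToGroup(RSGroupAdminClient rsGroupAdmin,
          Address masterAddress, String group) {
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), group);
        } catch (IOException e) {
          // Typically a ConstraintException: the master is not a region server,
          // so the rsgroup manager reports its address as offline/non-existent.
          System.out.println("Got this on setup, FYI: " + e);
        }
      }
    }
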
2023-07-21 08:15:02,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:02,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:02,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:02,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:02,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:02,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:02,810 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:02,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:02,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:02,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:02,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:02,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:02,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:46585] to rsgroup master 2023-07-21 08:15:02,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:02,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:57944 deadline: 1689928502830, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 2023-07-21 08:15:02,830 WARN [Listener at localhost/43961] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:46585 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:02,832 INFO [Listener at localhost/43961] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:02,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:02,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:02,833 INFO [Listener at localhost/43961] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:37025, jenkins-hbase5.apache.org:38059, jenkins-hbase5.apache.org:40169, jenkins-hbase5.apache.org:40889], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:02,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:02,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46585] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:02,834 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 08:15:02,834 INFO [Listener at localhost/43961] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 08:15:02,834 DEBUG [Listener at localhost/43961] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x702c0ae8 to 127.0.0.1:59404 2023-07-21 08:15:02,834 DEBUG [Listener at localhost/43961] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,835 DEBUG [Listener at localhost/43961] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 08:15:02,836 DEBUG [Listener at localhost/43961] util.JVMClusterUtil(257): Found active master hash=289641751, stopped=false 2023-07-21 08:15:02,836 DEBUG [Listener at localhost/43961] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 08:15:02,836 DEBUG [Listener at localhost/43961] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 08:15:02,836 INFO [Listener at localhost/43961] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:15:02,838 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:02,838 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:02,838 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:02,838 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:02,838 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:02,839 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:02,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:02,839 INFO [Listener at localhost/43961] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 08:15:02,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:02,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:02,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:02,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:02,840 DEBUG [Listener at localhost/43961] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x21869011 to 127.0.0.1:59404 2023-07-21 08:15:02,840 DEBUG [Listener at localhost/43961] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,840 INFO [Listener at localhost/43961] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,40889,1689927276956' ***** 2023-07-21 08:15:02,840 INFO [Listener at localhost/43961] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
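
From here the shutdown fans out: the master deletes /hbase/running in ZooKeeper, each region server's watcher fires, and the four region servers begin stopping. In test code this whole sequence is typically triggered by a single call; a minimal sketch follows, where the field and method names are assumptions rather than those of the actual test class.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class MiniClusterShutdownSketch {
      // Shared across the test class; started elsewhere in a @BeforeClass hook.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the HBase master and region servers, then the backing
        // mini DFS and ZooKeeper clusters, and cleans up the test data dirs.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
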
2023-07-21 08:15:02,840 INFO [Listener at localhost/43961] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,37025,1689927277157' ***** 2023-07-21 08:15:02,840 INFO [Listener at localhost/43961] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:02,840 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:02,840 INFO [Listener at localhost/43961] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,40169,1689927277346' ***** 2023-07-21 08:15:02,841 INFO [Listener at localhost/43961] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:02,840 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:02,841 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:02,841 INFO [Listener at localhost/43961] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,38059,1689927281154' ***** 2023-07-21 08:15:02,843 INFO [Listener at localhost/43961] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:02,844 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:02,858 INFO [RS:2;jenkins-hbase5:40169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7cc51cf3{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:02,858 INFO [RS:1;jenkins-hbase5:37025] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1b56eac1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:02,858 INFO [RS:0;jenkins-hbase5:40889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a9e2012{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:02,858 INFO [RS:3;jenkins-hbase5:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a482f9e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:02,863 INFO [RS:1;jenkins-hbase5:37025] server.AbstractConnector(383): Stopped ServerConnector@5d0a3d54{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:02,863 INFO [RS:0;jenkins-hbase5:40889] server.AbstractConnector(383): Stopped ServerConnector@69156046{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:02,863 INFO [RS:2;jenkins-hbase5:40169] server.AbstractConnector(383): Stopped ServerConnector@3023e605{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:02,863 INFO [RS:0;jenkins-hbase5:40889] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:02,863 INFO [RS:2;jenkins-hbase5:40169] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:02,863 INFO [RS:1;jenkins-hbase5:37025] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:02,863 INFO [RS:3;jenkins-hbase5:38059] server.AbstractConnector(383): Stopped ServerConnector@614a5820{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 
2023-07-21 08:15:02,864 INFO [RS:3;jenkins-hbase5:38059] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:02,864 INFO [RS:2;jenkins-hbase5:40169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2855a58d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:02,865 INFO [RS:0;jenkins-hbase5:40889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53b9762b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:02,865 INFO [RS:2;jenkins-hbase5:40169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f20ff62{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:02,864 INFO [RS:1;jenkins-hbase5:37025] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33446a5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:02,866 INFO [RS:0;jenkins-hbase5:40889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ffb745f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:02,865 INFO [RS:3;jenkins-hbase5:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41ed43db{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:02,867 INFO [RS:1;jenkins-hbase5:37025] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@19459c3b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:02,867 INFO [RS:3;jenkins-hbase5:38059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@537ec0a8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:02,869 INFO [RS:3;jenkins-hbase5:38059] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:02,869 INFO [RS:2;jenkins-hbase5:40169] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:02,870 INFO [RS:3;jenkins-hbase5:38059] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:02,870 INFO [RS:0;jenkins-hbase5:40889] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:02,870 INFO [RS:2;jenkins-hbase5:40169] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 08:15:02,870 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:02,870 INFO [RS:1;jenkins-hbase5:37025] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:02,870 INFO [RS:3;jenkins-hbase5:38059] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:02,870 INFO [RS:1;jenkins-hbase5:37025] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:02,870 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:02,870 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:02,870 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:15:02,870 INFO [RS:1;jenkins-hbase5:37025] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:02,870 DEBUG [RS:3;jenkins-hbase5:38059] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b2160ba to 127.0.0.1:59404 2023-07-21 08:15:02,870 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:02,871 DEBUG [RS:3;jenkins-hbase5:38059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,870 INFO [RS:2;jenkins-hbase5:40169] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:02,871 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,38059,1689927281154; all regions closed. 2023-07-21 08:15:02,870 INFO [RS:0;jenkins-hbase5:40889] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:02,871 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(3305): Received CLOSE for 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:02,871 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:02,871 DEBUG [RS:2;jenkins-hbase5:40169] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x34a1cbb3 to 127.0.0.1:59404 2023-07-21 08:15:02,871 DEBUG [RS:2;jenkins-hbase5:40169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,871 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 08:15:02,872 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1478): Online Regions={334ef3e6b2ee23b07963f9cbcdefd1e0=testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0.} 2023-07-21 08:15:02,870 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:15:02,873 DEBUG [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1504): Waiting on 334ef3e6b2ee23b07963f9cbcdefd1e0 2023-07-21 08:15:02,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 334ef3e6b2ee23b07963f9cbcdefd1e0, disabling compactions & flushes 2023-07-21 08:15:02,871 INFO [RS:0;jenkins-hbase5:40889] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 08:15:02,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:02,873 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(3305): Received CLOSE for 569f7e45dedb500f02cd8d4eaf3e648d 2023-07-21 08:15:02,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:02,873 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(3305): Received CLOSE for 39a5cd412e322750126f98dab14c6667 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 569f7e45dedb500f02cd8d4eaf3e648d, disabling compactions & flushes 2023-07-21 08:15:02,873 DEBUG [RS:1;jenkins-hbase5:37025] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22788eee to 127.0.0.1:59404 2023-07-21 08:15:02,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:15:02,874 DEBUG [RS:1;jenkins-hbase5:37025] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,874 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,37025,1689927277157; all regions closed. 2023-07-21 08:15:02,874 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(3305): Received CLOSE for 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. after waiting 1 ms 2023-07-21 08:15:02,874 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:02,874 DEBUG [RS:0;jenkins-hbase5:40889] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28021cd8 to 127.0.0.1:59404 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. after waiting 0 ms 2023-07-21 08:15:02,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:15:02,875 DEBUG [RS:0;jenkins-hbase5:40889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,875 INFO [RS:0;jenkins-hbase5:40889] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:02,875 INFO [RS:0;jenkins-hbase5:40889] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:02,875 INFO [RS:0;jenkins-hbase5:40889] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 08:15:02,875 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 08:15:02,891 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,891 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,891 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,892 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,904 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 08:15:02,904 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1478): Online Regions={569f7e45dedb500f02cd8d4eaf3e648d=hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d., 39a5cd412e322750126f98dab14c6667=unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667., 60b7870db4b1a6e4be10ee407b45c718=hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 08:15:02,904 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:15:02,905 DEBUG [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1504): Waiting on 1588230740, 39a5cd412e322750126f98dab14c6667, 569f7e45dedb500f02cd8d4eaf3e648d, 60b7870db4b1a6e4be10ee407b45c718 2023-07-21 08:15:02,905 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:15:02,905 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:15:02,905 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:15:02,905 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:15:02,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=36.31 KB heapSize=59.22 KB 2023-07-21 08:15:02,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/testRename/334ef3e6b2ee23b07963f9cbcdefd1e0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 08:15:02,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 
2023-07-21 08:15:02,915 DEBUG [RS:3;jenkins-hbase5:38059] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:02,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 334ef3e6b2ee23b07963f9cbcdefd1e0: 2023-07-21 08:15:02,915 INFO [RS:3;jenkins-hbase5:38059] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C38059%2C1689927281154:(num 1689927281622) 2023-07-21 08:15:02,915 DEBUG [RS:3;jenkins-hbase5:38059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689927296039.334ef3e6b2ee23b07963f9cbcdefd1e0. 2023-07-21 08:15:02,915 INFO [RS:3;jenkins-hbase5:38059] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,916 INFO [RS:3;jenkins-hbase5:38059] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:02,916 INFO [RS:3;jenkins-hbase5:38059] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:02,916 INFO [RS:3;jenkins-hbase5:38059] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:02,916 INFO [RS:3;jenkins-hbase5:38059] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:02,916 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:02,919 INFO [RS:3;jenkins-hbase5:38059] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:38059 2023-07-21 08:15:02,923 DEBUG [RS:1;jenkins-hbase5:37025] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:02,923 INFO [RS:1;jenkins-hbase5:37025] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C37025%2C1689927277157:(num 1689927279547) 2023-07-21 08:15:02,923 DEBUG [RS:1;jenkins-hbase5:37025] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:02,923 INFO [RS:1;jenkins-hbase5:37025] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:02,924 INFO [RS:1;jenkins-hbase5:37025] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:02,925 INFO [RS:1;jenkins-hbase5:37025] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:02,925 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:02,925 INFO [RS:1;jenkins-hbase5:37025] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:02,925 INFO [RS:1;jenkins-hbase5:37025] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 08:15:02,926 INFO [RS:1;jenkins-hbase5:37025] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:37025 2023-07-21 08:15:02,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/namespace/569f7e45dedb500f02cd8d4eaf3e648d/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-21 08:15:02,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:15:02,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 569f7e45dedb500f02cd8d4eaf3e648d: 2023-07-21 08:15:02,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689927280113.569f7e45dedb500f02cd8d4eaf3e648d. 2023-07-21 08:15:02,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 39a5cd412e322750126f98dab14c6667, disabling compactions & flushes 2023-07-21 08:15:02,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:15:02,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:15:02,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. after waiting 0 ms 2023-07-21 08:15:02,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:15:02,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/default/unmovedTable/39a5cd412e322750126f98dab14c6667/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 08:15:02,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:15:02,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 39a5cd412e322750126f98dab14c6667: 2023-07-21 08:15:02,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689927297701.39a5cd412e322750126f98dab14c6667. 2023-07-21 08:15:02,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 60b7870db4b1a6e4be10ee407b45c718, disabling compactions & flushes 2023-07-21 08:15:02,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:15:02,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 
2023-07-21 08:15:02,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. after waiting 0 ms 2023-07-21 08:15:02,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:15:02,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 60b7870db4b1a6e4be10ee407b45c718 1/1 column families, dataSize=27.15 KB heapSize=44.61 KB 2023-07-21 08:15:02,953 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=33.39 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/info/5e905747887543888c921e189fcc0b0a 2023-07-21 08:15:02,962 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e905747887543888c921e189fcc0b0a 2023-07-21 08:15:02,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.15 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/.tmp/m/e96d0931951c4c59a540491a8fab9417 2023-07-21 08:15:02,977 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:15:02,977 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:02,977 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:15:02,977 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:02,977 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 
08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38059,1689927281154 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:02,978 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,37025,1689927277157 2023-07-21 08:15:02,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e96d0931951c4c59a540491a8fab9417 2023-07-21 08:15:02,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/.tmp/m/e96d0931951c4c59a540491a8fab9417 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m/e96d0931951c4c59a540491a8fab9417 2023-07-21 08:15:02,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/rep_barrier/37fe082fa5a74888addf664dd610fc69 2023-07-21 08:15:02,986 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 37fe082fa5a74888addf664dd610fc69 2023-07-21 08:15:02,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e96d0931951c4c59a540491a8fab9417 2023-07-21 08:15:02,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/m/e96d0931951c4c59a540491a8fab9417, entries=28, sequenceid=101, filesize=6.1 K 2023-07-21 08:15:02,988 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.15 KB/27799, heapSize ~44.59 KB/45664, currentSize=0 B/0 for 60b7870db4b1a6e4be10ee407b45c718 in 38ms, sequenceid=101, compaction requested=false 2023-07-21 08:15:03,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/rsgroup/60b7870db4b1a6e4be10ee407b45c718/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-21 08:15:03,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:03,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:15:03,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 60b7870db4b1a6e4be10ee407b45c718: 2023-07-21 08:15:03,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689927280223.60b7870db4b1a6e4be10ee407b45c718. 2023-07-21 08:15:03,007 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/table/d8d6cdb01cf04cc78c4035c80e68cdf4 2023-07-21 08:15:03,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d8d6cdb01cf04cc78c4035c80e68cdf4 2023-07-21 08:15:03,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/info/5e905747887543888c921e189fcc0b0a as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info/5e905747887543888c921e189fcc0b0a 2023-07-21 08:15:03,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e905747887543888c921e189fcc0b0a 2023-07-21 08:15:03,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/info/5e905747887543888c921e189fcc0b0a, entries=52, sequenceid=210, filesize=10.7 K 2023-07-21 08:15:03,020 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/rep_barrier/37fe082fa5a74888addf664dd610fc69 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier/37fe082fa5a74888addf664dd610fc69 2023-07-21 08:15:03,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 37fe082fa5a74888addf664dd610fc69 2023-07-21 08:15:03,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/rep_barrier/37fe082fa5a74888addf664dd610fc69, entries=8, sequenceid=210, filesize=5.8 K 2023-07-21 08:15:03,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/.tmp/table/d8d6cdb01cf04cc78c4035c80e68cdf4 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table/d8d6cdb01cf04cc78c4035c80e68cdf4 2023-07-21 08:15:03,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d8d6cdb01cf04cc78c4035c80e68cdf4 2023-07-21 08:15:03,032 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/table/d8d6cdb01cf04cc78c4035c80e68cdf4, entries=16, sequenceid=210, filesize=6.0 K 2023-07-21 08:15:03,033 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~36.31 KB/37186, heapSize ~59.17 KB/60592, currentSize=0 B/0 for 1588230740 in 128ms, sequenceid=210, compaction requested=false 2023-07-21 08:15:03,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=101 2023-07-21 08:15:03,046 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:03,047 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:03,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:15:03,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:03,073 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,40169,1689927277346; all regions closed. 
2023-07-21 08:15:03,079 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,37025,1689927277157] 2023-07-21 08:15:03,079 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,37025,1689927277157; numProcessing=1 2023-07-21 08:15:03,080 DEBUG [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:03,080 INFO [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C40169%2C1689927277346.meta:.meta(num 1689927279821) 2023-07-21 08:15:03,081 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,37025,1689927277157 already deleted, retry=false 2023-07-21 08:15:03,081 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,37025,1689927277157 expired; onlineServers=3 2023-07-21 08:15:03,081 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,38059,1689927281154] 2023-07-21 08:15:03,081 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,38059,1689927281154; numProcessing=2 2023-07-21 08:15:03,082 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,38059,1689927281154 already deleted, retry=false 2023-07-21 08:15:03,082 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,38059,1689927281154 expired; onlineServers=2 2023-07-21 08:15:03,085 DEBUG [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:03,085 INFO [RS:2;jenkins-hbase5:40169] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C40169%2C1689927277346:(num 1689927279581) 2023-07-21 08:15:03,085 DEBUG [RS:2;jenkins-hbase5:40169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:03,085 INFO [RS:2;jenkins-hbase5:40169] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:03,086 INFO [RS:2;jenkins-hbase5:40169] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:03,086 INFO [RS:2;jenkins-hbase5:40169] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:03,086 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:03,086 INFO [RS:2;jenkins-hbase5:40169] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:03,086 INFO [RS:2;jenkins-hbase5:40169] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 08:15:03,086 INFO [RS:2;jenkins-hbase5:40169] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:40169 2023-07-21 08:15:03,090 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:03,090 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40169,1689927277346 2023-07-21 08:15:03,092 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:03,092 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,40169,1689927277346] 2023-07-21 08:15:03,092 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,40169,1689927277346; numProcessing=3 2023-07-21 08:15:03,093 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,40169,1689927277346 already deleted, retry=false 2023-07-21 08:15:03,094 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,40169,1689927277346 expired; onlineServers=1 2023-07-21 08:15:03,105 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,40889,1689927276956; all regions closed. 2023-07-21 08:15:03,108 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956/jenkins-hbase5.apache.org%2C40889%2C1689927276956.meta.1689927288408.meta not finished, retry = 0 2023-07-21 08:15:03,211 DEBUG [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:03,211 INFO [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C40889%2C1689927276956.meta:.meta(num 1689927288408) 2023-07-21 08:15:03,215 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/WALs/jenkins-hbase5.apache.org,40889,1689927276956/jenkins-hbase5.apache.org%2C40889%2C1689927276956.1689927279548 not finished, retry = 0 2023-07-21 08:15:03,239 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,239 INFO [RS:2;jenkins-hbase5:40169] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,40169,1689927277346; zookeeper connection closed. 
2023-07-21 08:15:03,239 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40169-0x101f28e99290003, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,239 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@631bbc81] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@631bbc81 2023-07-21 08:15:03,317 DEBUG [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/oldWALs 2023-07-21 08:15:03,317 INFO [RS:0;jenkins-hbase5:40889] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C40889%2C1689927276956:(num 1689927279548) 2023-07-21 08:15:03,317 DEBUG [RS:0;jenkins-hbase5:40889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:03,317 INFO [RS:0;jenkins-hbase5:40889] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:03,317 INFO [RS:0;jenkins-hbase5:40889] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:03,318 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:03,318 INFO [RS:0;jenkins-hbase5:40889] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:40889 2023-07-21 08:15:03,320 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40889,1689927276956 2023-07-21 08:15:03,320 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:03,323 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,40889,1689927276956] 2023-07-21 08:15:03,323 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,40889,1689927276956; numProcessing=4 2023-07-21 08:15:03,324 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,40889,1689927276956 already deleted, retry=false 2023-07-21 08:15:03,324 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,40889,1689927276956 expired; onlineServers=0 2023-07-21 08:15:03,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,46585,1689927275104' ***** 2023-07-21 08:15:03,324 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 08:15:03,325 DEBUG [M:0;jenkins-hbase5:46585] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@507e9f2c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:03,325 INFO [M:0;jenkins-hbase5:46585] 
regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:03,327 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:03,327 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:03,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:03,327 INFO [M:0;jenkins-hbase5:46585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@9ca6b1f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:15:03,328 INFO [M:0;jenkins-hbase5:46585] server.AbstractConnector(383): Stopped ServerConnector@668fa014{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:03,328 INFO [M:0;jenkins-hbase5:46585] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:03,328 INFO [M:0;jenkins-hbase5:46585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5310d071{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:03,329 INFO [M:0;jenkins-hbase5:46585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@966e0ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:03,329 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,46585,1689927275104 2023-07-21 08:15:03,329 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,46585,1689927275104; all regions closed. 2023-07-21 08:15:03,329 DEBUG [M:0;jenkins-hbase5:46585] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:03,329 INFO [M:0;jenkins-hbase5:46585] master.HMaster(1491): Stopping master jetty server 2023-07-21 08:15:03,330 INFO [M:0;jenkins-hbase5:46585] server.AbstractConnector(383): Stopped ServerConnector@534dab38{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:03,330 DEBUG [M:0;jenkins-hbase5:46585] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 08:15:03,330 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-21 08:15:03,331 DEBUG [M:0;jenkins-hbase5:46585] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 08:15:03,331 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927279103] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927279103,5,FailOnTimeoutGroup] 2023-07-21 08:15:03,331 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927279103] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927279103,5,FailOnTimeoutGroup] 2023-07-21 08:15:03,331 INFO [M:0;jenkins-hbase5:46585] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 08:15:03,331 INFO [M:0;jenkins-hbase5:46585] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 08:15:03,331 INFO [M:0;jenkins-hbase5:46585] hbase.ChoreService(369): Chore service for: master/jenkins-hbase5:0 had [] on shutdown 2023-07-21 08:15:03,331 DEBUG [M:0;jenkins-hbase5:46585] master.HMaster(1512): Stopping service threads 2023-07-21 08:15:03,331 INFO [M:0;jenkins-hbase5:46585] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 08:15:03,331 ERROR [M:0;jenkins-hbase5:46585] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 08:15:03,332 INFO [M:0;jenkins-hbase5:46585] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 08:15:03,332 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 08:15:03,333 DEBUG [M:0;jenkins-hbase5:46585] zookeeper.ZKUtil(398): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 08:15:03,333 WARN [M:0;jenkins-hbase5:46585] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 08:15:03,333 INFO [M:0;jenkins-hbase5:46585] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 08:15:03,333 INFO [M:0;jenkins-hbase5:46585] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 08:15:03,333 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:15:03,333 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:03,333 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 08:15:03,333 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:15:03,333 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:03,333 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.16 KB heapSize=621.25 KB 2023-07-21 08:15:03,339 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,339 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:38059-0x101f28e9929000b, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,339 INFO [RS:3;jenkins-hbase5:38059] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,38059,1689927281154; zookeeper connection closed. 2023-07-21 08:15:03,339 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4b67ae1c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4b67ae1c 2023-07-21 08:15:03,347 INFO [M:0;jenkins-hbase5:46585] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.16 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52ad90e27f674ae7840779c787c93ba6 2023-07-21 08:15:03,352 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52ad90e27f674ae7840779c787c93ba6 as hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52ad90e27f674ae7840779c787c93ba6 2023-07-21 08:15:03,357 INFO [M:0;jenkins-hbase5:46585] regionserver.HStore(1080): Added hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52ad90e27f674ae7840779c787c93ba6, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-21 08:15:03,358 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegion(2948): Finished flush of dataSize ~519.16 KB/531621, heapSize ~621.23 KB/636144, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=1152, compaction requested=false 2023-07-21 08:15:03,359 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:03,360 DEBUG [M:0;jenkins-hbase5:46585] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:03,363 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:03,363 INFO [M:0;jenkins-hbase5:46585] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-21 08:15:03,364 INFO [M:0;jenkins-hbase5:46585] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:46585 2023-07-21 08:15:03,365 DEBUG [M:0;jenkins-hbase5:46585] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase5.apache.org,46585,1689927275104 already deleted, retry=false 2023-07-21 08:15:03,539 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,540 INFO [RS:1;jenkins-hbase5:37025] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,37025,1689927277157; zookeeper connection closed. 2023-07-21 08:15:03,540 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:37025-0x101f28e99290002, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,540 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6d317be5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6d317be5 2023-07-21 08:15:03,640 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,640 INFO [M:0;jenkins-hbase5:46585] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,46585,1689927275104; zookeeper connection closed. 2023-07-21 08:15:03,640 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): master:46585-0x101f28e99290000, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,740 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,740 INFO [RS:0;jenkins-hbase5:40889] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,40889,1689927276956; zookeeper connection closed. 
2023-07-21 08:15:03,740 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): regionserver:40889-0x101f28e99290001, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:03,740 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2280e4ef] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2280e4ef 2023-07-21 08:15:03,741 INFO [Listener at localhost/43961] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 08:15:03,741 WARN [Listener at localhost/43961] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:03,744 INFO [Listener at localhost/43961] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:03,848 WARN [BP-1462393125-172.31.10.131-1689927271631 heartbeating to localhost/127.0.0.1:40383] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 08:15:03,848 WARN [BP-1462393125-172.31.10.131-1689927271631 heartbeating to localhost/127.0.0.1:40383] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1462393125-172.31.10.131-1689927271631 (Datanode Uuid 40983784-f99d-46a1-b27c-40ed8e83e242) service to localhost/127.0.0.1:40383 2023-07-21 08:15:03,849 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data5/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:03,850 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data6/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:03,851 WARN [Listener at localhost/43961] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:03,854 INFO [Listener at localhost/43961] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:03,908 WARN [BP-1462393125-172.31.10.131-1689927271631 heartbeating to localhost/127.0.0.1:40383] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1462393125-172.31.10.131-1689927271631 (Datanode Uuid 3e9c857b-bfa6-4261-9d27-1da317b47ae6) service to localhost/127.0.0.1:40383 2023-07-21 08:15:03,909 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data3/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:03,909 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data4/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:03,961 WARN [Listener at localhost/43961] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:03,969 INFO [Listener at localhost/43961] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:04,071 WARN [BP-1462393125-172.31.10.131-1689927271631 heartbeating to localhost/127.0.0.1:40383] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 08:15:04,072 WARN [BP-1462393125-172.31.10.131-1689927271631 heartbeating to localhost/127.0.0.1:40383] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1462393125-172.31.10.131-1689927271631 (Datanode Uuid 436b67bb-3846-4788-ade9-6e39b308acdd) service to localhost/127.0.0.1:40383 2023-07-21 08:15:04,072 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data1/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:04,073 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/cluster_ef3d24ac-6ee9-b3da-4b12-98bc9887e9f5/dfs/data/data2/current/BP-1462393125-172.31.10.131-1689927271631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:04,103 INFO [Listener at localhost/43961] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:04,223 INFO [Listener at localhost/43961] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.log.dir so I do NOT create it in target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/71dc30e8-4c5c-01cc-745c-2ea7a6b45c2a/hadoop.tmp.dir so I do NOT create it in target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383, deleteOnExit=true 2023-07-21 08:15:04,288 INFO [Listener at localhost/43961] 
hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/test.cache.data in system properties and HBase conf 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir in system properties and HBase conf 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 08:15:04,289 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 08:15:04,289 DEBUG [Listener at localhost/43961] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 08:15:04,290 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/nfs.dump.dir in system properties and HBase conf 2023-07-21 08:15:04,291 INFO [Listener at localhost/43961] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir in system properties and HBase conf 2023-07-21 08:15:04,291 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:15:04,291 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 08:15:04,291 INFO [Listener at localhost/43961] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 08:15:04,295 WARN [Listener at localhost/43961] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:15:04,295 WARN [Listener at localhost/43961] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:15:04,319 DEBUG [Listener at localhost/43961-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101f28e9929000a, quorum=127.0.0.1:59404, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 08:15:04,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101f28e9929000a, quorum=127.0.0.1:59404, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 08:15:04,353 WARN [Listener at localhost/43961] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:04,355 INFO [Listener at localhost/43961] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:04,360 INFO [Listener at localhost/43961] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/Jetty_localhost_36265_hdfs____.o4adk8/webapp 2023-07-21 08:15:04,463 INFO [Listener at localhost/43961] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36265 2023-07-21 08:15:04,472 WARN [Listener at localhost/43961] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:15:04,472 WARN [Listener at localhost/43961] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:15:04,569 WARN [Listener at localhost/41921] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:04,586 WARN [Listener at localhost/41921] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:04,591 WARN [Listener 
at localhost/41921] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:04,592 INFO [Listener at localhost/41921] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:04,597 INFO [Listener at localhost/41921] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/Jetty_localhost_43317_datanode____ka17kx/webapp 2023-07-21 08:15:04,726 INFO [Listener at localhost/41921] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43317 2023-07-21 08:15:04,737 WARN [Listener at localhost/39829] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:04,777 WARN [Listener at localhost/39829] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:04,780 WARN [Listener at localhost/39829] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:04,781 INFO [Listener at localhost/39829] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:04,784 INFO [Listener at localhost/39829] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/Jetty_localhost_37335_datanode____s4ylp4/webapp 2023-07-21 08:15:04,899 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3bdc5ce8c46cb041: Processing first storage report for DS-9e111182-2098-4d22-9473-041d1c4dbd65 from datanode da4245b8-b425-4e24-a633-b7a65263d49b 2023-07-21 08:15:04,899 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3bdc5ce8c46cb041: from storage DS-9e111182-2098-4d22-9473-041d1c4dbd65 node DatanodeRegistration(127.0.0.1:38089, datanodeUuid=da4245b8-b425-4e24-a633-b7a65263d49b, infoPort=46511, infoSecurePort=0, ipcPort=39829, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:04,900 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3bdc5ce8c46cb041: Processing first storage report for DS-6d8cf097-ee24-43fb-80aa-622d794b0f8c from datanode da4245b8-b425-4e24-a633-b7a65263d49b 2023-07-21 08:15:04,900 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3bdc5ce8c46cb041: from storage DS-6d8cf097-ee24-43fb-80aa-622d794b0f8c node DatanodeRegistration(127.0.0.1:38089, datanodeUuid=da4245b8-b425-4e24-a633-b7a65263d49b, infoPort=46511, infoSecurePort=0, ipcPort=39829, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:04,913 INFO [Listener at localhost/39829] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37335 2023-07-21 08:15:04,936 WARN [Listener at localhost/34657] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-21 08:15:04,991 WARN [Listener at localhost/34657] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:05,000 WARN [Listener at localhost/34657] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:05,005 INFO [Listener at localhost/34657] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:05,008 INFO [Listener at localhost/34657] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/Jetty_localhost_39293_datanode____gvzl3h/webapp 2023-07-21 08:15:05,089 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbf8237f5db897d14: Processing first storage report for DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925 from datanode b35d9684-610e-4e23-9a0d-dcaa90ef9ab8 2023-07-21 08:15:05,089 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbf8237f5db897d14: from storage DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925 node DatanodeRegistration(127.0.0.1:39573, datanodeUuid=b35d9684-610e-4e23-9a0d-dcaa90ef9ab8, infoPort=38261, infoSecurePort=0, ipcPort=34657, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:05,089 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbf8237f5db897d14: Processing first storage report for DS-5e165d94-c3f7-4920-bfe5-ff6f389c8c45 from datanode b35d9684-610e-4e23-9a0d-dcaa90ef9ab8 2023-07-21 08:15:05,089 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbf8237f5db897d14: from storage DS-5e165d94-c3f7-4920-bfe5-ff6f389c8c45 node DatanodeRegistration(127.0.0.1:39573, datanodeUuid=b35d9684-610e-4e23-9a0d-dcaa90ef9ab8, infoPort=38261, infoSecurePort=0, ipcPort=34657, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:05,125 INFO [Listener at localhost/34657] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39293 2023-07-21 08:15:05,136 WARN [Listener at localhost/44391] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:05,265 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1c5fa1fcdaa46f45: Processing first storage report for DS-8b3f7cc9-54a2-4805-baae-279bd2184780 from datanode f83013c2-731e-430d-816f-3dae88c98489 2023-07-21 08:15:05,265 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1c5fa1fcdaa46f45: from storage DS-8b3f7cc9-54a2-4805-baae-279bd2184780 node DatanodeRegistration(127.0.0.1:33835, datanodeUuid=f83013c2-731e-430d-816f-3dae88c98489, infoPort=45299, infoSecurePort=0, ipcPort=44391, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:05,265 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1c5fa1fcdaa46f45: Processing first storage 
report for DS-a62f6ad9-f59b-430d-a269-81e126e8def3 from datanode f83013c2-731e-430d-816f-3dae88c98489 2023-07-21 08:15:05,265 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1c5fa1fcdaa46f45: from storage DS-a62f6ad9-f59b-430d-a269-81e126e8def3 node DatanodeRegistration(127.0.0.1:33835, datanodeUuid=f83013c2-731e-430d-816f-3dae88c98489, infoPort=45299, infoSecurePort=0, ipcPort=44391, storageInfo=lv=-57;cid=testClusterID;nsid=1571661892;c=1689927304298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:05,330 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:05,330 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 08:15:05,331 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 08:15:05,364 DEBUG [Listener at localhost/44391] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc 2023-07-21 08:15:05,368 INFO [Listener at localhost/44391] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/zookeeper_0, clientPort=59333, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 08:15:05,370 INFO [Listener at localhost/44391] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59333 2023-07-21 08:15:05,370 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,372 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,395 INFO [Listener at localhost/44391] util.FSUtils(471): Created version file at hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c with version=8 2023-07-21 08:15:05,395 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/hbase-staging 2023-07-21 08:15:05,396 DEBUG [Listener at localhost/44391] hbase.LocalHBaseCluster(134): 
Setting Master Port to random. 2023-07-21 08:15:05,396 DEBUG [Listener at localhost/44391] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 08:15:05,396 DEBUG [Listener at localhost/44391] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 08:15:05,397 DEBUG [Listener at localhost/44391] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 08:15:05,397 INFO [Listener at localhost/44391] client.ConnectionUtils(127): master/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:05,398 INFO [Listener at localhost/44391] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:05,399 INFO [Listener at localhost/44391] ipc.NettyRpcServer(120): Bind to /172.31.10.131:43777 2023-07-21 08:15:05,399 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,400 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,401 INFO [Listener at localhost/44391] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43777 connecting to ZooKeeper ensemble=127.0.0.1:59333 2023-07-21 08:15:05,407 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:437770x0, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:05,408 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43777-0x101f28f12f20000 connected 2023-07-21 08:15:05,422 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:05,423 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:05,423 DEBUG 
[Listener at localhost/44391] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:05,424 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43777 2023-07-21 08:15:05,424 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43777 2023-07-21 08:15:05,424 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43777 2023-07-21 08:15:05,424 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43777 2023-07-21 08:15:05,425 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43777 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:05,427 INFO [Listener at localhost/44391] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
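[Editor's note] The ZKUtil entries above show each new server process connecting to the ZooKeeper ensemble at 127.0.0.1:59333 and registering watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. Internally HBase does this through its own ZKWatcher/ZKUtil classes; the sketch below uses the plain Apache ZooKeeper client instead, only to illustrate what an existence watch on a not-yet-created znode means. Connection string and session timeout are taken from the log; everything else is illustrative.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Ensemble and 90000 ms session timeout as reported in this log.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:59333", 90_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // exists() with watch=true registers a one-shot watcher even when the znode
    // does not yet exist; the "Set watcher on znode that does not yet exist"
    // lines above are HBase doing the equivalent for its bootstrap znodes.
    for (String znode : new String[] { "/hbase/master", "/hbase/running", "/hbase/acl" }) {
      zk.exists(znode, true);
    }
    zk.close();
  }
}
```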
2023-07-21 08:15:05,428 INFO [Listener at localhost/44391] http.HttpServer(1146): Jetty bound to port 33243 2023-07-21 08:15:05,428 INFO [Listener at localhost/44391] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:05,430 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,430 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@26c901e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:05,430 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,431 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bac69f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:05,555 INFO [Listener at localhost/44391] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:05,556 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:05,556 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:05,557 INFO [Listener at localhost/44391] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:05,557 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,558 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5fd2d473{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/jetty-0_0_0_0-33243-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5997428115783684988/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:15:05,560 INFO [Listener at localhost/44391] server.AbstractConnector(333): Started ServerConnector@2e304bdc{HTTP/1.1, (http/1.1)}{0.0.0.0:33243} 2023-07-21 08:15:05,560 INFO [Listener at localhost/44391] server.Server(415): Started @35772ms 2023-07-21 08:15:05,560 INFO [Listener at localhost/44391] master.HMaster(444): hbase.rootdir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c, hbase.cluster.distributed=false 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,577 INFO 
[Listener at localhost/44391] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:05,577 INFO [Listener at localhost/44391] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:05,578 INFO [Listener at localhost/44391] ipc.NettyRpcServer(120): Bind to /172.31.10.131:38067 2023-07-21 08:15:05,578 INFO [Listener at localhost/44391] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:05,579 DEBUG [Listener at localhost/44391] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:05,579 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,580 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,581 INFO [Listener at localhost/44391] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38067 connecting to ZooKeeper ensemble=127.0.0.1:59333 2023-07-21 08:15:05,585 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:380670x0, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:05,586 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38067-0x101f28f12f20001 connected 2023-07-21 08:15:05,586 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:05,586 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:05,587 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:05,588 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38067 2023-07-21 08:15:05,588 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38067 2023-07-21 08:15:05,588 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38067 2023-07-21 08:15:05,591 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38067 2023-07-21 08:15:05,592 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38067 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:05,594 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:05,595 INFO [Listener at localhost/44391] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:15:05,595 INFO [Listener at localhost/44391] http.HttpServer(1146): Jetty bound to port 40833 2023-07-21 08:15:05,596 INFO [Listener at localhost/44391] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:05,599 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,599 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@731f4564{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:05,599 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,600 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6acb7487{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:05,723 INFO [Listener at localhost/44391] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:05,724 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:05,724 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:05,724 INFO [Listener at localhost/44391] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:05,725 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,726 INFO 
[Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@111c8d49{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/jetty-0_0_0_0-40833-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4747960257082642720/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:05,727 INFO [Listener at localhost/44391] server.AbstractConnector(333): Started ServerConnector@3bed803e{HTTP/1.1, (http/1.1)}{0.0.0.0:40833} 2023-07-21 08:15:05,728 INFO [Listener at localhost/44391] server.Server(415): Started @35940ms 2023-07-21 08:15:05,742 INFO [Listener at localhost/44391] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:05,742 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,742 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,742 INFO [Listener at localhost/44391] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:05,743 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,743 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:05,743 INFO [Listener at localhost/44391] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:05,743 INFO [Listener at localhost/44391] ipc.NettyRpcServer(120): Bind to /172.31.10.131:40175 2023-07-21 08:15:05,744 INFO [Listener at localhost/44391] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:05,752 DEBUG [Listener at localhost/44391] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:05,753 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,755 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,756 INFO [Listener at localhost/44391] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40175 connecting to ZooKeeper ensemble=127.0.0.1:59333 2023-07-21 08:15:05,759 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:401750x0, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
08:15:05,761 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40175-0x101f28f12f20002 connected 2023-07-21 08:15:05,761 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:05,761 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:05,761 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:05,762 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40175 2023-07-21 08:15:05,762 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40175 2023-07-21 08:15:05,762 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40175 2023-07-21 08:15:05,762 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40175 2023-07-21 08:15:05,763 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40175 2023-07-21 08:15:05,764 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:05,765 INFO [Listener at localhost/44391] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
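[Editor's note] By this point all three region server processes have bound their RPC ports and joined the ZooKeeper ensemble. Once startup completes, a test normally talks to the minicluster through the utility's shared connection; the helper below is a hypothetical sketch (not part of the logged test) using only public 2.x client calls, where `util` is assumed to be the HBaseTestingUtility instance that produced the entries above.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class ClusterUpCheckSketch {
  // Returns the number of live region servers reported by the master;
  // with the options logged above this should be 3 once startup finishes.
  static int liveRegionServers(HBaseTestingUtility util) throws Exception {
    try (Admin admin = util.getConnection().getAdmin()) {
      return admin.getClusterMetrics().getLiveServerMetrics().size();
    }
  }
}
```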
2023-07-21 08:15:05,766 INFO [Listener at localhost/44391] http.HttpServer(1146): Jetty bound to port 42017 2023-07-21 08:15:05,766 INFO [Listener at localhost/44391] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:05,768 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,768 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1da00a90{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:05,768 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,768 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70c264{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:05,884 INFO [Listener at localhost/44391] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:05,885 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:05,885 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:05,886 INFO [Listener at localhost/44391] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:15:05,886 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,887 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@e320849{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/jetty-0_0_0_0-42017-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6761085177584988167/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:05,888 INFO [Listener at localhost/44391] server.AbstractConnector(333): Started ServerConnector@1e627738{HTTP/1.1, (http/1.1)}{0.0.0.0:42017} 2023-07-21 08:15:05,888 INFO [Listener at localhost/44391] server.Server(415): Started @36101ms 2023-07-21 08:15:05,899 INFO [Listener at localhost/44391] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:05,899 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,899 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,900 INFO [Listener at localhost/44391] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:05,900 INFO 
[Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:05,900 INFO [Listener at localhost/44391] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:05,900 INFO [Listener at localhost/44391] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:05,900 INFO [Listener at localhost/44391] ipc.NettyRpcServer(120): Bind to /172.31.10.131:45973 2023-07-21 08:15:05,901 INFO [Listener at localhost/44391] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:05,902 DEBUG [Listener at localhost/44391] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:05,902 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,903 INFO [Listener at localhost/44391] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:05,904 INFO [Listener at localhost/44391] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45973 connecting to ZooKeeper ensemble=127.0.0.1:59333 2023-07-21 08:15:05,907 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:459730x0, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:05,908 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45973-0x101f28f12f20003 connected 2023-07-21 08:15:05,908 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:05,909 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:05,909 DEBUG [Listener at localhost/44391] zookeeper.ZKUtil(164): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:05,910 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45973 2023-07-21 08:15:05,910 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45973 2023-07-21 08:15:05,910 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45973 2023-07-21 08:15:05,911 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45973 2023-07-21 08:15:05,911 DEBUG [Listener at localhost/44391] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45973 2023-07-21 08:15:05,913 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:05,913 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:05,913 INFO [Listener at localhost/44391] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:05,914 INFO [Listener at localhost/44391] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:05,914 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:05,914 INFO [Listener at localhost/44391] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:05,914 INFO [Listener at localhost/44391] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:15:05,915 INFO [Listener at localhost/44391] http.HttpServer(1146): Jetty bound to port 35251 2023-07-21 08:15:05,915 INFO [Listener at localhost/44391] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:05,916 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,916 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7257b488{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:05,917 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:05,917 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ee15bbe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:06,034 INFO [Listener at localhost/44391] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:06,035 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:06,036 INFO [Listener at localhost/44391] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:06,036 INFO [Listener at localhost/44391] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:06,037 INFO [Listener at localhost/44391] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:06,038 INFO [Listener at localhost/44391] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@71504e91{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/java.io.tmpdir/jetty-0_0_0_0-35251-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7164955240096462974/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:06,039 INFO [Listener at localhost/44391] server.AbstractConnector(333): Started ServerConnector@66d60559{HTTP/1.1, (http/1.1)}{0.0.0.0:35251} 2023-07-21 08:15:06,039 INFO [Listener at localhost/44391] server.Server(415): Started @36252ms 2023-07-21 08:15:06,041 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:06,046 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1b684e42{HTTP/1.1, (http/1.1)}{0.0.0.0:37137} 2023-07-21 08:15:06,046 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(415): Started @36259ms 2023-07-21 08:15:06,046 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,047 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:15:06,048 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,050 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:06,050 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:06,050 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:06,050 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:06,051 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,052 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:15:06,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:15:06,054 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase5.apache.org,43777,1689927305397 from backup master directory 2023-07-21 08:15:06,055 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,055 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:15:06,055 WARN [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:06,055 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,072 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/hbase.id with ID: 48c07bea-ea6d-4ffc-ae2b-31a1fef46577 2023-07-21 08:15:06,076 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 08:15:06,103 INFO [master/jenkins-hbase5:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:06,106 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,121 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3b9a8c23 to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:06,128 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15fa6570, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:06,128 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:06,128 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 08:15:06,129 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALFactory(158): Instantiating 
WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:06,131 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store-tmp 2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:15:06,156 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:06,156 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 08:15:06,156 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:06,157 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/WALs/jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,159 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C43777%2C1689927305397, suffix=, logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/WALs/jenkins-hbase5.apache.org,43777,1689927305397, archiveDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/oldWALs, maxLogs=10 2023-07-21 08:15:06,174 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK] 2023-07-21 08:15:06,174 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK] 2023-07-21 08:15:06,174 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK] 2023-07-21 08:15:06,176 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/WALs/jenkins-hbase5.apache.org,43777,1689927305397/jenkins-hbase5.apache.org%2C43777%2C1689927305397.1689927306159 2023-07-21 08:15:06,176 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK], DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK], DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK]] 2023-07-21 08:15:06,177 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:06,177 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,177 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,177 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,179 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,180 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 08:15:06,180 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 08:15:06,181 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,181 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,182 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,184 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:06,186 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:06,186 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11075288800, jitterRate=0.03146664798259735}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:06,186 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:06,187 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 08:15:06,188 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 08:15:06,188 INFO 
[master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 08:15:06,188 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 08:15:06,188 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 08:15:06,188 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 08:15:06,188 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 08:15:06,189 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 08:15:06,190 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 08:15:06,191 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 08:15:06,191 INFO [master/jenkins-hbase5:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 08:15:06,191 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 08:15:06,193 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,193 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 08:15:06,194 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 08:15:06,195 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 08:15:06,196 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:06,196 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:06,196 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 08:15:06,196 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:06,196 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,196 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase5.apache.org,43777,1689927305397, sessionid=0x101f28f12f20000, setting cluster-up flag (Was=false) 2023-07-21 08:15:06,201 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,206 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 08:15:06,207 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,210 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,215 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 08:15:06,217 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:06,218 WARN [master/jenkins-hbase5:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.hbase-snapshot/.tmp 2023-07-21 08:15:06,225 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 08:15:06,226 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 08:15:06,226 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 08:15:06,227 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:15:06,227 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 08:15:06,228 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 08:15:06,229 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:06,242 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:15:06,242 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(951): ClusterId : 48c07bea-ea6d-4ffc-ae2b-31a1fef46577 2023-07-21 08:15:06,243 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(951): ClusterId : 48c07bea-ea6d-4ffc-ae2b-31a1fef46577 2023-07-21 08:15:06,243 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(951): ClusterId : 48c07bea-ea6d-4ffc-ae2b-31a1fef46577 2023-07-21 08:15:06,244 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:06,246 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:06,243 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 08:15:06,246 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:06,246 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:15:06,246 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase5:0, corePoolSize=10, maxPoolSize=10 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:06,247 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,249 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689927336249 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 08:15:06,250 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 08:15:06,252 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,252 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:06,252 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:06,252 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 08:15:06,252 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 08:15:06,252 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 08:15:06,253 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:06,253 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 08:15:06,253 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 08:15:06,253 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 08:15:06,253 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927306253,5,FailOnTimeoutGroup] 2023-07-21 08:15:06,254 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927306254,5,FailOnTimeoutGroup] 2023-07-21 08:15:06,254 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,254 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 08:15:06,254 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,254 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,254 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:06,254 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:06,255 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:06,256 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:06,256 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:06,256 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:06,259 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ReadOnlyZKClient(139): Connect 0x5d7c0449 to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:06,260 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:06,261 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:06,264 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ReadOnlyZKClient(139): Connect 0x3668b8b6 to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:06,264 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ReadOnlyZKClient(139): Connect 0x50d0c2c7 to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:06,300 DEBUG [RS:0;jenkins-hbase5:38067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38af500b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:06,300 DEBUG [RS:1;jenkins-hbase5:40175] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e1736ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:06,301 DEBUG [RS:2;jenkins-hbase5:45973] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57846f39, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:06,301 DEBUG [RS:1;jenkins-hbase5:40175] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a28a9b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:06,301 DEBUG [RS:0;jenkins-hbase5:38067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c90d3c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:06,301 DEBUG [RS:2;jenkins-hbase5:45973] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1286ebad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:06,313 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:06,314 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase5:38067 2023-07-21 08:15:06,314 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase5:45973 2023-07-21 08:15:06,314 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:06,314 INFO [RS:2;jenkins-hbase5:45973] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:06,314 INFO [RS:2;jenkins-hbase5:45973] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:06,314 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:15:06,314 INFO [RS:0;jenkins-hbase5:38067] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:06,314 INFO [RS:0;jenkins-hbase5:38067] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:06,314 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 08:15:06,314 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c 2023-07-21 08:15:06,314 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase5:40175 2023-07-21 08:15:06,315 INFO [RS:1;jenkins-hbase5:40175] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:06,315 INFO [RS:1;jenkins-hbase5:40175] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:06,315 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 08:15:06,315 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,43777,1689927305397 with isa=jenkins-hbase5.apache.org/172.31.10.131:38067, startcode=1689927305576 2023-07-21 08:15:06,315 DEBUG [RS:0;jenkins-hbase5:38067] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:06,315 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,43777,1689927305397 with isa=jenkins-hbase5.apache.org/172.31.10.131:45973, startcode=1689927305899 2023-07-21 08:15:06,315 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,43777,1689927305397 with isa=jenkins-hbase5.apache.org/172.31.10.131:40175, startcode=1689927305741 2023-07-21 08:15:06,316 DEBUG [RS:2;jenkins-hbase5:45973] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:06,316 DEBUG [RS:1;jenkins-hbase5:40175] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:06,318 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:45237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:06,318 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:33081, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:06,318 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:51225, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:06,320 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43777] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,320 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:15:06,320 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43777] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,321 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 08:15:06,321 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 08:15:06,321 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c 2023-07-21 08:15:06,321 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 08:15:06,321 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41921 2023-07-21 08:15:06,321 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43777] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,321 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c 2023-07-21 08:15:06,321 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33243 2023-07-21 08:15:06,321 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:15:06,321 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41921 2023-07-21 08:15:06,321 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 08:15:06,321 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33243 2023-07-21 08:15:06,321 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c 2023-07-21 08:15:06,321 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41921 2023-07-21 08:15:06,321 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33243 2023-07-21 08:15:06,322 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:06,331 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,40175,1689927305741] 2023-07-21 08:15:06,331 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,45973,1689927305899] 2023-07-21 08:15:06,331 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,38067,1689927305576] 2023-07-21 08:15:06,331 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,332 WARN [RS:0;jenkins-hbase5:38067] hbase.ZNodeClearer(69): 
Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:06,332 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ZKUtil(162): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,332 INFO [RS:0;jenkins-hbase5:38067] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:06,332 WARN [RS:1;jenkins-hbase5:40175] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:06,332 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,332 INFO [RS:1;jenkins-hbase5:40175] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:06,332 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,333 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,335 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ZKUtil(162): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,335 WARN [RS:2;jenkins-hbase5:45973] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 08:15:06,335 INFO [RS:2;jenkins-hbase5:45973] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:06,335 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:15:06,335 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,337 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/info 2023-07-21 08:15:06,337 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:15:06,338 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,338 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:15:06,343 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:06,343 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,343 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ZKUtil(162): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,344 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:15:06,344 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ZKUtil(162): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,344 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,345 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ZKUtil(162): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,345 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,345 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,345 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:15:06,345 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ZKUtil(162): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,346 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ZKUtil(162): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,346 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:06,346 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:06,346 INFO [RS:1;jenkins-hbase5:40175] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:06,346 INFO [RS:0;jenkins-hbase5:38067] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:06,346 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ZKUtil(162): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,346 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/table 2023-07-21 08:15:06,347 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:15:06,347 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:06,347 INFO [RS:1;jenkins-hbase5:40175] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:06,347 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,348 INFO [RS:2;jenkins-hbase5:45973] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:06,351 INFO [RS:1;jenkins-hbase5:40175] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:06,351 INFO [RS:2;jenkins-hbase5:45973] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:06,351 INFO [RS:0;jenkins-hbase5:38067] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:06,351 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,351 INFO [RS:2;jenkins-hbase5:45973] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:06,351 INFO [RS:0;jenkins-hbase5:38067] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:06,351 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,351 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,351 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:06,351 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740 2023-07-21 08:15:06,351 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:06,352 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:06,353 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740 2023-07-21 08:15:06,354 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,354 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,354 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,354 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, 
maxPoolSize=2 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:2;jenkins-hbase5:45973] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:0;jenkins-hbase5:38067] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,355 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,356 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,357 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,357 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,357 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,357 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,357 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,357 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:06,357 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,357 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,357 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,357 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,357 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,358 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,358 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,358 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,358 DEBUG [RS:1;jenkins-hbase5:40175] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:06,358 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 08:15:06,359 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,359 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,359 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,359 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,360 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:06,362 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10273546560, jitterRate=-0.04320141673088074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:15:06,362 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:15:06,362 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:15:06,362 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:06,363 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:15:06,363 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:06,363 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 08:15:06,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 08:15:06,365 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 08:15:06,366 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 08:15:06,371 INFO [RS:2;jenkins-hbase5:45973] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:06,371 INFO [RS:0;jenkins-hbase5:38067] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:06,371 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,45973,1689927305899-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,371 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,38067,1689927305576-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,371 INFO [RS:1;jenkins-hbase5:40175] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:06,372 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40175,1689927305741-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,382 INFO [RS:2;jenkins-hbase5:45973] regionserver.Replication(203): jenkins-hbase5.apache.org,45973,1689927305899 started 2023-07-21 08:15:06,382 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,45973,1689927305899, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:45973, sessionid=0x101f28f12f20003 2023-07-21 08:15:06,382 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:06,382 DEBUG [RS:2;jenkins-hbase5:45973] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,382 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,45973,1689927305899' 2023-07-21 08:15:06,382 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:06,382 INFO [RS:1;jenkins-hbase5:40175] regionserver.Replication(203): jenkins-hbase5.apache.org,40175,1689927305741 started 2023-07-21 08:15:06,382 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,40175,1689927305741, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:40175, sessionid=0x101f28f12f20002 2023-07-21 08:15:06,382 INFO [RS:0;jenkins-hbase5:38067] regionserver.Replication(203): jenkins-hbase5.apache.org,38067,1689927305576 started 2023-07-21 08:15:06,382 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:06,382 DEBUG [RS:1;jenkins-hbase5:40175] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,382 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40175,1689927305741' 2023-07-21 08:15:06,382 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:06,382 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,38067,1689927305576, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:38067, sessionid=0x101f28f12f20001 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,38067,1689927305576' 2023-07-21 08:15:06,383 DEBUG 
[RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,45973,1689927305899' 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,40175,1689927305741' 2023-07-21 08:15:06,383 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:06,383 DEBUG [RS:2;jenkins-hbase5:45973] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,383 DEBUG [RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,38067,1689927305576' 2023-07-21 08:15:06,384 DEBUG [RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:06,384 DEBUG [RS:1;jenkins-hbase5:40175] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:06,384 DEBUG [RS:2;jenkins-hbase5:45973] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:06,384 DEBUG [RS:0;jenkins-hbase5:38067] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:06,384 DEBUG [RS:1;jenkins-hbase5:40175] 
procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:06,384 INFO [RS:2;jenkins-hbase5:45973] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 08:15:06,384 INFO [RS:1;jenkins-hbase5:40175] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 08:15:06,384 DEBUG [RS:0;jenkins-hbase5:38067] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:06,384 INFO [RS:0;jenkins-hbase5:38067] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 08:15:06,386 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,386 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,386 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,387 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ZKUtil(398): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 08:15:06,387 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ZKUtil(398): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 08:15:06,387 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ZKUtil(398): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 08:15:06,387 INFO [RS:0;jenkins-hbase5:38067] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 08:15:06,387 INFO [RS:2;jenkins-hbase5:45973] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 08:15:06,387 INFO [RS:1;jenkins-hbase5:40175] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 08:15:06,388 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,388 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,388 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,388 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,388 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,388 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:06,492 INFO [RS:1;jenkins-hbase5:40175] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40175%2C1689927305741, suffix=, logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,40175,1689927305741, archiveDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs, maxLogs=32 2023-07-21 08:15:06,492 INFO [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C45973%2C1689927305899, suffix=, logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,45973,1689927305899, archiveDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs, maxLogs=32 2023-07-21 08:15:06,492 INFO [RS:0;jenkins-hbase5:38067] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C38067%2C1689927305576, suffix=, logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,38067,1689927305576, archiveDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs, maxLogs=32 2023-07-21 08:15:06,516 DEBUG [jenkins-hbase5:43777] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 08:15:06,517 DEBUG [jenkins-hbase5:43777] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:06,517 DEBUG [jenkins-hbase5:43777] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:06,518 DEBUG [jenkins-hbase5:43777] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:06,518 DEBUG [jenkins-hbase5:43777] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:06,518 DEBUG [jenkins-hbase5:43777] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:06,518 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,45973,1689927305899, state=OPENING 2023-07-21 08:15:06,524 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK] 2023-07-21 08:15:06,524 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 08:15:06,526 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK] 2023-07-21 08:15:06,526 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK] 2023-07-21 08:15:06,526 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK] 2023-07-21 08:15:06,527 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:06,527 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK] 2023-07-21 08:15:06,529 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK] 2023-07-21 08:15:06,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,45973,1689927305899}] 2023-07-21 08:15:06,531 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK] 2023-07-21 08:15:06,531 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK] 2023-07-21 08:15:06,532 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:15:06,533 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK] 2023-07-21 08:15:06,534 WARN [ReadOnlyZKClient-127.0.0.1:59333@0x3b9a8c23] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 08:15:06,534 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:06,536 INFO [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,45973,1689927305899/jenkins-hbase5.apache.org%2C45973%2C1689927305899.1689927306498 2023-07-21 08:15:06,536 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:06,536 DEBUG [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK], DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK], DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK]] 2023-07-21 08:15:06,537 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45973] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.10.131:47272 deadline: 1689927366536, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,537 INFO [RS:0;jenkins-hbase5:38067] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,38067,1689927305576/jenkins-hbase5.apache.org%2C38067%2C1689927305576.1689927306498 2023-07-21 08:15:06,538 DEBUG [RS:0;jenkins-hbase5:38067] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK], DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK], DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK]] 2023-07-21 08:15:06,540 INFO [RS:1;jenkins-hbase5:40175] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,40175,1689927305741/jenkins-hbase5.apache.org%2C40175%2C1689927305741.1689927306503 2023-07-21 08:15:06,540 DEBUG [RS:1;jenkins-hbase5:40175] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK], DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK], DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK]] 2023-07-21 08:15:06,688 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:06,690 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:06,691 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47280, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:06,696 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 08:15:06,696 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:06,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C45973%2C1689927305899.meta, suffix=.meta, logDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,45973,1689927305899, archiveDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs, maxLogs=32 2023-07-21 08:15:06,713 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK] 2023-07-21 08:15:06,718 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK] 2023-07-21 08:15:06,718 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK] 2023-07-21 08:15:06,727 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,45973,1689927305899/jenkins-hbase5.apache.org%2C45973%2C1689927305899.meta.1689927306698.meta 2023-07-21 08:15:06,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39573,DS-d223bb95-ba0b-412d-9e3f-42c53a5a2925,DISK], DatanodeInfoWithStorage[127.0.0.1:33835,DS-8b3f7cc9-54a2-4805-baae-279bd2184780,DISK], DatanodeInfoWithStorage[127.0.0.1:38089,DS-9e111182-2098-4d22-9473-041d1c4dbd65,DISK]] 2023-07-21 08:15:06,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 08:15:06,728 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 08:15:06,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 08:15:06,729 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:15:06,731 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/info 2023-07-21 08:15:06,731 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/info 2023-07-21 08:15:06,731 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:15:06,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:15:06,733 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:06,733 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:06,733 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:15:06,734 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,734 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:15:06,735 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/table 2023-07-21 08:15:06,735 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/table 2023-07-21 08:15:06,735 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:15:06,736 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:06,737 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740 2023-07-21 08:15:06,738 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740 2023-07-21 08:15:06,741 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 08:15:06,742 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:15:06,743 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10438510560, jitterRate=-0.027837947010993958}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:15:06,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:15:06,743 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689927306688 2023-07-21 08:15:06,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 08:15:06,749 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 08:15:06,749 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,45973,1689927305899, state=OPEN 2023-07-21 08:15:06,752 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 08:15:06,752 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:15:06,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 08:15:06,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,45973,1689927305899 in 223 msec 2023-07-21 08:15:06,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 08:15:06,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 390 msec 2023-07-21 08:15:06,756 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 527 msec 2023-07-21 08:15:06,757 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689927306757, completionTime=-1 2023-07-21 08:15:06,757 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 08:15:06,757 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 08:15:06,761 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 08:15:06,761 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689927366761 2023-07-21 08:15:06,761 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689927426761 2023-07-21 08:15:06,761 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-21 08:15:06,766 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43777,1689927305397-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43777,1689927305397-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43777,1689927305397-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase5:43777, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 08:15:06,767 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:06,768 DEBUG [master/jenkins-hbase5:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 08:15:06,768 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 08:15:06,769 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:06,770 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:06,772 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:06,772 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4 empty. 2023-07-21 08:15:06,773 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:06,773 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 08:15:06,788 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:06,790 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7b0d94163fc29777533e27cbbe8cd3c4, NAME => 'hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp 2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7b0d94163fc29777533e27cbbe8cd3c4, disabling compactions & flushes 2023-07-21 08:15:06,799 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 
2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. after waiting 0 ms 2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:06,799 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:06,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7b0d94163fc29777533e27cbbe8cd3c4: 2023-07-21 08:15:06,801 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:06,802 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927306802"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927306802"}]},"ts":"1689927306802"} 2023-07-21 08:15:06,804 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:06,805 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:06,806 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927306805"}]},"ts":"1689927306805"} 2023-07-21 08:15:06,807 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 08:15:06,811 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:06,811 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:06,811 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:06,811 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:06,811 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:06,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7b0d94163fc29777533e27cbbe8cd3c4, ASSIGN}] 2023-07-21 08:15:06,813 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7b0d94163fc29777533e27cbbe8cd3c4, ASSIGN 2023-07-21 08:15:06,814 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7b0d94163fc29777533e27cbbe8cd3c4, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,38067,1689927305576; forceNewPlan=false, retain=false 2023-07-21 08:15:06,841 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:06,847 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 08:15:06,848 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:06,849 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:06,851 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:06,852 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771 empty. 
2023-07-21 08:15:06,853 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:06,853 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 08:15:06,872 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:06,873 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 532fe73b2f88afcbdc29db6515522771, NAME => 'hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp 2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 532fe73b2f88afcbdc29db6515522771, disabling compactions & flushes 2023-07-21 08:15:06,882 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. after waiting 0 ms 2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:06,882 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 
2023-07-21 08:15:06,882 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 532fe73b2f88afcbdc29db6515522771: 2023-07-21 08:15:06,884 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:06,885 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927306885"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927306885"}]},"ts":"1689927306885"} 2023-07-21 08:15:06,886 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:06,887 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:06,887 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927306887"}]},"ts":"1689927306887"} 2023-07-21 08:15:06,888 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 08:15:06,891 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:06,891 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:06,891 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:06,892 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:06,892 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:06,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=532fe73b2f88afcbdc29db6515522771, ASSIGN}] 2023-07-21 08:15:06,895 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=532fe73b2f88afcbdc29db6515522771, ASSIGN 2023-07-21 08:15:06,896 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=532fe73b2f88afcbdc29db6515522771, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40175,1689927305741; forceNewPlan=false, retain=false 2023-07-21 08:15:06,896 INFO [jenkins-hbase5:43777] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 08:15:06,898 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7b0d94163fc29777533e27cbbe8cd3c4, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:06,898 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927306898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927306898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927306898"}]},"ts":"1689927306898"} 2023-07-21 08:15:06,898 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=532fe73b2f88afcbdc29db6515522771, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:06,898 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927306898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927306898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927306898"}]},"ts":"1689927306898"} 2023-07-21 08:15:06,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 7b0d94163fc29777533e27cbbe8cd3c4, server=jenkins-hbase5.apache.org,38067,1689927305576}] 2023-07-21 08:15:06,900 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 532fe73b2f88afcbdc29db6515522771, server=jenkins-hbase5.apache.org,40175,1689927305741}] 2023-07-21 08:15:07,052 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:07,053 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:07,053 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:07,053 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:07,054 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:33002, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:07,054 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:07,060 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 
2023-07-21 08:15:07,060 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7b0d94163fc29777533e27cbbe8cd3c4, NAME => 'hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:07,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 532fe73b2f88afcbdc29db6515522771, NAME => 'hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. service=MultiRowMutationService 2023-07-21 08:15:07,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,068 INFO [StoreOpener-7b0d94163fc29777533e27cbbe8cd3c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,068 INFO [StoreOpener-532fe73b2f88afcbdc29db6515522771-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,069 DEBUG [StoreOpener-7b0d94163fc29777533e27cbbe8cd3c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/info 2023-07-21 08:15:07,069 DEBUG [StoreOpener-7b0d94163fc29777533e27cbbe8cd3c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/info 2023-07-21 08:15:07,070 DEBUG [StoreOpener-532fe73b2f88afcbdc29db6515522771-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/m 2023-07-21 08:15:07,070 DEBUG [StoreOpener-532fe73b2f88afcbdc29db6515522771-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/m 2023-07-21 08:15:07,070 INFO [StoreOpener-7b0d94163fc29777533e27cbbe8cd3c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7b0d94163fc29777533e27cbbe8cd3c4 columnFamilyName info 2023-07-21 08:15:07,070 INFO 
[StoreOpener-532fe73b2f88afcbdc29db6515522771-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 532fe73b2f88afcbdc29db6515522771 columnFamilyName m 2023-07-21 08:15:07,070 INFO [StoreOpener-7b0d94163fc29777533e27cbbe8cd3c4-1] regionserver.HStore(310): Store=7b0d94163fc29777533e27cbbe8cd3c4/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:07,070 INFO [StoreOpener-532fe73b2f88afcbdc29db6515522771-1] regionserver.HStore(310): Store=532fe73b2f88afcbdc29db6515522771/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:07,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,071 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,072 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:07,076 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:07,083 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:07,083 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:07,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(1072): Opened 532fe73b2f88afcbdc29db6515522771; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@23f42ed3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:07,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 7b0d94163fc29777533e27cbbe8cd3c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11449023520, jitterRate=0.06627340614795685}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:07,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 532fe73b2f88afcbdc29db6515522771: 2023-07-21 08:15:07,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 7b0d94163fc29777533e27cbbe8cd3c4: 2023-07-21 08:15:07,085 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771., pid=9, masterSystemTime=1689927307052 2023-07-21 08:15:07,087 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4., pid=8, masterSystemTime=1689927307052 2023-07-21 08:15:07,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:07,091 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:07,091 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=532fe73b2f88afcbdc29db6515522771, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:07,091 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927307091"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927307091"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927307091"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927307091"}]},"ts":"1689927307091"} 2023-07-21 08:15:07,092 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:07,093 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 
2023-07-21 08:15:07,093 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7b0d94163fc29777533e27cbbe8cd3c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:07,093 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927307093"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927307093"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927307093"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927307093"}]},"ts":"1689927307093"} 2023-07-21 08:15:07,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 08:15:07,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 532fe73b2f88afcbdc29db6515522771, server=jenkins-hbase5.apache.org,40175,1689927305741 in 193 msec 2023-07-21 08:15:07,097 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-21 08:15:07,097 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 7b0d94163fc29777533e27cbbe8cd3c4, server=jenkins-hbase5.apache.org,38067,1689927305576 in 196 msec 2023-07-21 08:15:07,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 08:15:07,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=532fe73b2f88afcbdc29db6515522771, ASSIGN in 203 msec 2023-07-21 08:15:07,098 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:07,098 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307098"}]},"ts":"1689927307098"} 2023-07-21 08:15:07,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 08:15:07,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7b0d94163fc29777533e27cbbe8cd3c4, ASSIGN in 286 msec 2023-07-21 08:15:07,099 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:07,099 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307099"}]},"ts":"1689927307099"} 2023-07-21 08:15:07,100 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 08:15:07,101 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 08:15:07,104 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:07,105 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:07,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 263 msec 2023-07-21 08:15:07,107 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 339 msec 2023-07-21 08:15:07,150 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:07,151 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:07,154 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 08:15:07,154 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 08:15:07,158 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:07,158 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:07,160 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:15:07,162 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,43777,1689927305397] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 08:15:07,169 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 08:15:07,170 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:07,170 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:07,173 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:07,174 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 
172.31.10.131:33014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:07,176 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 08:15:07,182 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:07,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-21 08:15:07,187 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 08:15:07,194 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:07,196 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-21 08:15:07,201 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 08:15:07,206 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 08:15:07,206 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.151sec 2023-07-21 08:15:07,206 INFO [master/jenkins-hbase5:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-21 08:15:07,206 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:07,207 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 08:15:07,207 INFO [master/jenkins-hbase5:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 08:15:07,208 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:07,209 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:07,210 INFO [master/jenkins-hbase5:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 08:15:07,210 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,211 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100 empty. 2023-07-21 08:15:07,211 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,211 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 08:15:07,214 INFO [master/jenkins-hbase5:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 08:15:07,214 INFO [master/jenkins-hbase5:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 08:15:07,216 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:07,217 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:07,217 INFO [master/jenkins-hbase5:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-21 08:15:07,217 INFO [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 08:15:07,217 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43777,1689927305397-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 08:15:07,217 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43777,1689927305397-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 08:15:07,217 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 08:15:07,223 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:07,224 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 83e7aca4dc5ee3cc247bf52ca6add100, NAME => 'hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp 2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 83e7aca4dc5ee3cc247bf52ca6add100, disabling compactions & flushes 2023-07-21 08:15:07,235 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. after waiting 0 ms 2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,235 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 
2023-07-21 08:15:07,235 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 83e7aca4dc5ee3cc247bf52ca6add100: 2023-07-21 08:15:07,237 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:07,238 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689927307238"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927307238"}]},"ts":"1689927307238"} 2023-07-21 08:15:07,239 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:07,240 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:07,240 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307240"}]},"ts":"1689927307240"} 2023-07-21 08:15:07,241 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 08:15:07,244 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:07,244 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:07,244 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:07,244 DEBUG [Listener at localhost/44391] zookeeper.ReadOnlyZKClient(139): Connect 0x5a950151 to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:07,244 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:07,244 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:07,244 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=83e7aca4dc5ee3cc247bf52ca6add100, ASSIGN}] 2023-07-21 08:15:07,248 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=83e7aca4dc5ee3cc247bf52ca6add100, ASSIGN 2023-07-21 08:15:07,249 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=83e7aca4dc5ee3cc247bf52ca6add100, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,40175,1689927305741; forceNewPlan=false, retain=false 2023-07-21 08:15:07,251 DEBUG [Listener at localhost/44391] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bcb3b7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:07,252 DEBUG 
[hconnection-0x4f4a10d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:07,254 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:07,255 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:07,255 INFO [Listener at localhost/44391] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:07,257 DEBUG [Listener at localhost/44391] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 08:15:07,259 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:38318, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 08:15:07,262 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 08:15:07,262 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:07,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(492): Client=jenkins//172.31.10.131 set balanceSwitch=false 2023-07-21 08:15:07,263 DEBUG [Listener at localhost/44391] zookeeper.ReadOnlyZKClient(139): Connect 0x277c98ff to 127.0.0.1:59333 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:07,268 DEBUG [Listener at localhost/44391] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a1b9d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:07,268 INFO [Listener at localhost/44391] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59333 2023-07-21 08:15:07,274 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:07,276 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101f28f12f2000a connected 2023-07-21 08:15:07,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$15(3014): Client=jenkins//172.31.10.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 08:15:07,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 08:15:07,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 08:15:07,289 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): 
master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:07,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-21 08:15:07,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 08:15:07,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:07,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 08:15:07,394 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:07,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-21 08:15:07,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 08:15:07,396 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:07,397 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:15:07,400 INFO [jenkins-hbase5:43777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 08:15:07,401 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=83e7aca4dc5ee3cc247bf52ca6add100, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:07,401 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689927307401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927307401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927307401"}]},"ts":"1689927307401"} 2023-07-21 08:15:07,401 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:07,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 83e7aca4dc5ee3cc247bf52ca6add100, server=jenkins-hbase5.apache.org,40175,1689927305741}] 2023-07-21 08:15:07,403 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,403 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af empty. 2023-07-21 08:15:07,404 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,404 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 08:15:07,420 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:07,421 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc616ea384b719c3481299e4243980af, NAME => 'np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp 2023-07-21 08:15:07,431 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,431 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing cc616ea384b719c3481299e4243980af, disabling compactions & flushes 2023-07-21 08:15:07,431 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 
2023-07-21 08:15:07,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:07,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. after waiting 0 ms 2023-07-21 08:15:07,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:07,432 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:07,432 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for cc616ea384b719c3481299e4243980af: 2023-07-21 08:15:07,434 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:07,435 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927307435"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927307435"}]},"ts":"1689927307435"} 2023-07-21 08:15:07,436 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:07,439 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:07,439 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307439"}]},"ts":"1689927307439"} 2023-07-21 08:15:07,441 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 08:15:07,445 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:07,445 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:07,445 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:07,445 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:07,445 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:07,445 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, ASSIGN}] 2023-07-21 08:15:07,446 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, ASSIGN 2023-07-21 08:15:07,447 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=cc616ea384b719c3481299e4243980af, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,45973,1689927305899; forceNewPlan=false, retain=false 2023-07-21 08:15:07,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 08:15:07,559 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 83e7aca4dc5ee3cc247bf52ca6add100, NAME => 'hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:07,559 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,560 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,562 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,563 DEBUG [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/q 2023-07-21 08:15:07,564 DEBUG [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/q 2023-07-21 08:15:07,564 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 83e7aca4dc5ee3cc247bf52ca6add100 columnFamilyName q 2023-07-21 08:15:07,565 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] regionserver.HStore(310): Store=83e7aca4dc5ee3cc247bf52ca6add100/q, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:07,565 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,567 DEBUG [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/u 2023-07-21 08:15:07,567 DEBUG [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/u 2023-07-21 08:15:07,567 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 83e7aca4dc5ee3cc247bf52ca6add100 columnFamilyName u 2023-07-21 08:15:07,568 INFO [StoreOpener-83e7aca4dc5ee3cc247bf52ca6add100-1] regionserver.HStore(310): Store=83e7aca4dc5ee3cc247bf52ca6add100/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:07,569 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,569 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,572 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-21 08:15:07,573 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:07,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:07,575 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 83e7aca4dc5ee3cc247bf52ca6add100; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11206876800, jitterRate=0.04372173547744751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 08:15:07,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 83e7aca4dc5ee3cc247bf52ca6add100: 2023-07-21 08:15:07,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100., pid=16, masterSystemTime=1689927307554 2023-07-21 08:15:07,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:07,578 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=83e7aca4dc5ee3cc247bf52ca6add100, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:07,578 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689927307578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927307578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927307578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927307578"}]},"ts":"1689927307578"} 2023-07-21 08:15:07,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 08:15:07,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 83e7aca4dc5ee3cc247bf52ca6add100, server=jenkins-hbase5.apache.org,40175,1689927305741 in 177 msec 2023-07-21 08:15:07,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 08:15:07,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=83e7aca4dc5ee3cc247bf52ca6add100, ASSIGN in 337 msec 2023-07-21 08:15:07,585 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:07,585 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307585"}]},"ts":"1689927307585"} 2023-07-21 08:15:07,587 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 08:15:07,590 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:07,592 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 384 msec 2023-07-21 08:15:07,597 INFO [jenkins-hbase5:43777] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 08:15:07,598 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cc616ea384b719c3481299e4243980af, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:07,598 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927307598"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927307598"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927307598"}]},"ts":"1689927307598"} 2023-07-21 08:15:07,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure cc616ea384b719c3481299e4243980af, server=jenkins-hbase5.apache.org,45973,1689927305899}] 2023-07-21 08:15:07,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 08:15:07,755 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 
2023-07-21 08:15:07,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc616ea384b719c3481299e4243980af, NAME => 'np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:07,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:07,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,756 INFO [StoreOpener-cc616ea384b719c3481299e4243980af-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,758 DEBUG [StoreOpener-cc616ea384b719c3481299e4243980af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af/fam1 2023-07-21 08:15:07,758 DEBUG [StoreOpener-cc616ea384b719c3481299e4243980af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af/fam1 2023-07-21 08:15:07,758 INFO [StoreOpener-cc616ea384b719c3481299e4243980af-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc616ea384b719c3481299e4243980af columnFamilyName fam1 2023-07-21 08:15:07,759 INFO [StoreOpener-cc616ea384b719c3481299e4243980af-1] regionserver.HStore(310): Store=cc616ea384b719c3481299e4243980af/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:07,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for cc616ea384b719c3481299e4243980af 2023-07-21 08:15:07,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:07,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened cc616ea384b719c3481299e4243980af; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11634932160, jitterRate=0.08358749747276306}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:07,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for cc616ea384b719c3481299e4243980af: 2023-07-21 08:15:07,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af., pid=18, masterSystemTime=1689927307752 2023-07-21 08:15:07,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:07,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:07,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cc616ea384b719c3481299e4243980af, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:07,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927307766"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927307766"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927307766"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927307766"}]},"ts":"1689927307766"} 2023-07-21 08:15:07,769 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 08:15:07,769 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure cc616ea384b719c3481299e4243980af, server=jenkins-hbase5.apache.org,45973,1689927305899 in 168 msec 2023-07-21 08:15:07,771 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-21 08:15:07,771 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, ASSIGN in 324 msec 2023-07-21 08:15:07,771 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:07,771 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927307771"}]},"ts":"1689927307771"} 2023-07-21 08:15:07,772 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 08:15:07,774 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:07,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 384 msec 2023-07-21 08:15:07,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 08:15:07,999 INFO [Listener at localhost/44391] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-21 08:15:08,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:08,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 08:15:08,003 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:08,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 08:15:08,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 08:15:08,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-21 08:15:08,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 08:15:08,112 INFO [Listener at localhost/44391] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
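The rollback above is the namespace region quota at work: np1 permits at most 5 regions, so the requested layout for np1:table2 is rejected with QuotaExceededException before any region is created. A hedged client-side sketch of the same pattern follows; the namespace, table, and family names are taken from the log, while the split keys, class, and method names are illustrative only and are not the test's actual code.

// Sketch: cap a namespace at 5 regions, then request a table whose region count
// would exceed the cap. The master rejects it with QuotaExceededException.
import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceQuotaSketch {
  static void createCappedNamespaceAndOverflowIt(Admin admin) throws IOException {
    admin.createNamespace(NamespaceDescriptor.create("np1")
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .build());

    TableDescriptorBuilder td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table2"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"));

    // Five split keys => six regions, one more than the namespace allows.
    byte[][] splits = {
        Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
        Bytes.toBytes("4"), Bytes.toBytes("5")
    };
    try {
      admin.createTable(td.build(), splits);
    } catch (IOException e) {
      // Surfaces the master-side QuotaExceededException seen in the log.
    }
  }
}

As the log shows, the failure is reported to the caller through the create-table future (HBaseAdmin$TableFuture), after the procedure has already been rolled back on the master.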
2023-07-21 08:15:08,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:08,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:08,114 INFO [Listener at localhost/44391] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 08:15:08,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable np1:table1 2023-07-21 08:15:08,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 08:15:08,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 08:15:08,118 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927308118"}]},"ts":"1689927308118"} 2023-07-21 08:15:08,119 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 08:15:08,120 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 08:15:08,121 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, UNASSIGN}] 2023-07-21 08:15:08,123 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, UNASSIGN 2023-07-21 08:15:08,124 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=cc616ea384b719c3481299e4243980af, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:08,124 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927308124"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927308124"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927308124"}]},"ts":"1689927308124"} 2023-07-21 08:15:08,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure cc616ea384b719c3481299e4243980af, server=jenkins-hbase5.apache.org,45973,1689927305899}] 2023-07-21 08:15:08,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 08:15:08,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close cc616ea384b719c3481299e4243980af 2023-07-21 08:15:08,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing cc616ea384b719c3481299e4243980af, disabling compactions & flushes 2023-07-21 08:15:08,279 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:08,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:08,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. after waiting 0 ms 2023-07-21 08:15:08,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:08,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/np1/table1/cc616ea384b719c3481299e4243980af/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:08,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af. 2023-07-21 08:15:08,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for cc616ea384b719c3481299e4243980af: 2023-07-21 08:15:08,286 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed cc616ea384b719c3481299e4243980af 2023-07-21 08:15:08,286 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=cc616ea384b719c3481299e4243980af, regionState=CLOSED 2023-07-21 08:15:08,286 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927308286"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927308286"}]},"ts":"1689927308286"} 2023-07-21 08:15:08,290 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 08:15:08,290 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure cc616ea384b719c3481299e4243980af, server=jenkins-hbase5.apache.org,45973,1689927305899 in 163 msec 2023-07-21 08:15:08,292 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 08:15:08,292 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=cc616ea384b719c3481299e4243980af, UNASSIGN in 169 msec 2023-07-21 08:15:08,293 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927308293"}]},"ts":"1689927308293"} 2023-07-21 08:15:08,295 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 08:15:08,296 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 08:15:08,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 182 msec 2023-07-21 08:15:08,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 08:15:08,420 INFO [Listener at localhost/44391] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 08:15:08,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete np1:table1 2023-07-21 08:15:08,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,423 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 08:15:08,424 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:08,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:15:08,428 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:08,430 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af/fam1, FileablePath, hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af/recovered.edits] 2023-07-21 08:15:08,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 08:15:08,439 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af/recovered.edits/4.seqid to hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/archive/data/np1/table1/cc616ea384b719c3481299e4243980af/recovered.edits/4.seqid 2023-07-21 08:15:08,439 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/.tmp/data/np1/table1/cc616ea384b719c3481299e4243980af 2023-07-21 08:15:08,440 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 08:15:08,442 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,443 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-21 08:15:08,448 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-21 08:15:08,449 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,449 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 08:15:08,449 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927308449"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:08,451 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 08:15:08,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cc616ea384b719c3481299e4243980af, NAME => 'np1:table1,,1689927307390.cc616ea384b719c3481299e4243980af.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 08:15:08,451 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-21 08:15:08,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927308451"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:08,452 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 08:15:08,455 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 08:15:08,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 34 msec 2023-07-21 08:15:08,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 08:15:08,533 INFO [Listener at localhost/44391] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 08:15:08,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.HMaster$17(3086): Client=jenkins//172.31.10.131 delete np1 2023-07-21 08:15:08,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,546 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,549 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,551 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 08:15:08,553 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 08:15:08,553 DEBUG [Listener at 
localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:08,553 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,555 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 08:15:08,556 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-21 08:15:08,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43777] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 08:15:08,653 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 08:15:08,653 INFO [Listener at localhost/44391] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 08:15:08,653 DEBUG [Listener at localhost/44391] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a950151 to 127.0.0.1:59333 2023-07-21 08:15:08,653 DEBUG [Listener at localhost/44391] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,654 DEBUG [Listener at localhost/44391] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 08:15:08,654 DEBUG [Listener at localhost/44391] util.JVMClusterUtil(257): Found active master hash=1720145454, stopped=false 2023-07-21 08:15:08,654 DEBUG [Listener at localhost/44391] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 08:15:08,654 DEBUG [Listener at localhost/44391] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 08:15:08,654 DEBUG [Listener at localhost/44391] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 08:15:08,654 INFO [Listener at localhost/44391] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:08,656 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:08,656 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:08,656 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:08,656 INFO [Listener at localhost/44391] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 08:15:08,656 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:08,656 DEBUG 
[Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:08,658 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:08,658 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:08,658 DEBUG [Listener at localhost/44391] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b9a8c23 to 127.0.0.1:59333 2023-07-21 08:15:08,658 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:08,658 DEBUG [Listener at localhost/44391] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,658 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:08,659 INFO [Listener at localhost/44391] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,38067,1689927305576' ***** 2023-07-21 08:15:08,659 INFO [Listener at localhost/44391] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:08,659 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:08,659 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:08,660 INFO [Listener at localhost/44391] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,40175,1689927305741' ***** 2023-07-21 08:15:08,661 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,662 INFO [Listener at localhost/44391] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:08,664 INFO [Listener at localhost/44391] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,45973,1689927305899' ***** 2023-07-21 08:15:08,665 INFO [Listener at localhost/44391] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:08,664 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:08,666 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:08,673 INFO [RS:0;jenkins-hbase5:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@111c8d49{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:08,674 INFO [RS:1;jenkins-hbase5:40175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@e320849{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:08,674 INFO [RS:0;jenkins-hbase5:38067] 
server.AbstractConnector(383): Stopped ServerConnector@3bed803e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:08,674 INFO [RS:2;jenkins-hbase5:45973] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@71504e91{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:08,674 INFO [RS:0;jenkins-hbase5:38067] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:08,674 INFO [RS:1;jenkins-hbase5:40175] server.AbstractConnector(383): Stopped ServerConnector@1e627738{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:08,675 INFO [RS:1;jenkins-hbase5:40175] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:08,675 INFO [RS:2;jenkins-hbase5:45973] server.AbstractConnector(383): Stopped ServerConnector@66d60559{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:08,678 INFO [RS:2;jenkins-hbase5:45973] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:08,678 INFO [RS:0;jenkins-hbase5:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6acb7487{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:08,678 INFO [RS:1;jenkins-hbase5:40175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70c264{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:08,678 INFO [RS:0;jenkins-hbase5:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@731f4564{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:08,678 INFO [RS:1;jenkins-hbase5:40175] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1da00a90{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:08,678 INFO [RS:2;jenkins-hbase5:45973] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ee15bbe{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:08,678 INFO [RS:2;jenkins-hbase5:45973] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7257b488{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:08,679 INFO [RS:0;jenkins-hbase5:38067] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:08,679 INFO [RS:0;jenkins-hbase5:38067] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:08,679 INFO [RS:0;jenkins-hbase5:38067] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 08:15:08,679 INFO [RS:2;jenkins-hbase5:45973] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:08,679 INFO [RS:1;jenkins-hbase5:40175] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:08,679 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:08,679 INFO [RS:1;jenkins-hbase5:40175] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:08,681 INFO [RS:1;jenkins-hbase5:40175] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:08,681 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(3305): Received CLOSE for 532fe73b2f88afcbdc29db6515522771 2023-07-21 08:15:08,679 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(3305): Received CLOSE for 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:08,681 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(3305): Received CLOSE for 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:08,681 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:08,681 DEBUG [RS:1;jenkins-hbase5:40175] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50d0c2c7 to 127.0.0.1:59333 2023-07-21 08:15:08,681 DEBUG [RS:1;jenkins-hbase5:40175] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,681 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 08:15:08,681 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1478): Online Regions={532fe73b2f88afcbdc29db6515522771=hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771., 83e7aca4dc5ee3cc247bf52ca6add100=hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100.} 2023-07-21 08:15:08,681 DEBUG [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1504): Waiting on 532fe73b2f88afcbdc29db6515522771, 83e7aca4dc5ee3cc247bf52ca6add100 2023-07-21 08:15:08,680 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:08,679 INFO [RS:2;jenkins-hbase5:45973] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:08,682 DEBUG [RS:2;jenkins-hbase5:45973] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d7c0449 to 127.0.0.1:59333 2023-07-21 08:15:08,682 DEBUG [RS:2;jenkins-hbase5:45973] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
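Everything from the "Shutting down minicluster" message onward is HBaseTestingUtility tearing down the in-process cluster: the master and the three region servers (RS:0, RS:1, RS:2) stop their info servers, flush managers, and snapshot managers before closing their online regions. A minimal sketch of that lifecycle, assuming the branch-2.4 test utility API and not the actual TestRSGroupsAdmin1 code, looks like this:

// Sketch: start a 1-master / 3-region-server mini cluster, run a test body,
// then shut it down, producing a teardown sequence like the one logged above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    util.startMiniCluster(option);
    try {
      // ... test body: create/disable/delete tables, exercise rsgroup admin ...
    } finally {
      util.shutdownMiniCluster(); // emits the "Shutting down minicluster" sequence
    }
  }
}

The region close and memstore flush lines that follow are the region servers draining their online regions as part of that shutdown.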
2023-07-21 08:15:08,682 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 08:15:08,682 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:08,682 DEBUG [RS:0;jenkins-hbase5:38067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3668b8b6 to 127.0.0.1:59333 2023-07-21 08:15:08,682 DEBUG [RS:0;jenkins-hbase5:38067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,682 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 08:15:08,682 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1478): Online Regions={7b0d94163fc29777533e27cbbe8cd3c4=hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4.} 2023-07-21 08:15:08,682 DEBUG [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1504): Waiting on 7b0d94163fc29777533e27cbbe8cd3c4 2023-07-21 08:15:08,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 532fe73b2f88afcbdc29db6515522771, disabling compactions & flushes 2023-07-21 08:15:08,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:08,683 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 08:15:08,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:08,683 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 08:15:08,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. after waiting 0 ms 2023-07-21 08:15:08,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 
2023-07-21 08:15:08,684 DEBUG [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 08:15:08,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 7b0d94163fc29777533e27cbbe8cd3c4, disabling compactions & flushes 2023-07-21 08:15:08,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 532fe73b2f88afcbdc29db6515522771 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:15:08,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:15:08,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:08,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:08,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. after waiting 0 ms 2023-07-21 08:15:08,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 
2023-07-21 08:15:08,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 7b0d94163fc29777533e27cbbe8cd3c4 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 08:15:08,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/info/dac1ebbcfa6940c8a513061d6c50d442 2023-07-21 08:15:08,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/.tmp/info/f1c0210d37ec4fa8a88d1ed604a5a68b 2023-07-21 08:15:08,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/.tmp/m/c53794148d21464a953288be7d826e01 2023-07-21 08:15:08,722 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dac1ebbcfa6940c8a513061d6c50d442 2023-07-21 08:15:08,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1c0210d37ec4fa8a88d1ed604a5a68b 2023-07-21 08:15:08,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/.tmp/info/f1c0210d37ec4fa8a88d1ed604a5a68b as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/info/f1c0210d37ec4fa8a88d1ed604a5a68b 2023-07-21 08:15:08,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/.tmp/m/c53794148d21464a953288be7d826e01 as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/m/c53794148d21464a953288be7d826e01 2023-07-21 08:15:08,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1c0210d37ec4fa8a88d1ed604a5a68b 2023-07-21 08:15:08,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/info/f1c0210d37ec4fa8a88d1ed604a5a68b, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 08:15:08,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 7b0d94163fc29777533e27cbbe8cd3c4 in 46ms, sequenceid=8, compaction requested=false 2023-07-21 08:15:08,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 08:15:08,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/rep_barrier/018a12d863ac46a98b75d240f2eed861 2023-07-21 08:15:08,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/m/c53794148d21464a953288be7d826e01, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 08:15:08,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 532fe73b2f88afcbdc29db6515522771 in 56ms, sequenceid=7, compaction requested=false 2023-07-21 08:15:08,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 08:15:08,746 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 018a12d863ac46a98b75d240f2eed861 2023-07-21 08:15:08,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/namespace/7b0d94163fc29777533e27cbbe8cd3c4/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 08:15:08,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:08,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 7b0d94163fc29777533e27cbbe8cd3c4: 2023-07-21 08:15:08,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689927306767.7b0d94163fc29777533e27cbbe8cd3c4. 2023-07-21 08:15:08,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/rsgroup/532fe73b2f88afcbdc29db6515522771/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:08,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 532fe73b2f88afcbdc29db6515522771: 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689927306840.532fe73b2f88afcbdc29db6515522771. 
2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 83e7aca4dc5ee3cc247bf52ca6add100, disabling compactions & flushes 2023-07-21 08:15:08,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. after waiting 0 ms 2023-07-21 08:15:08,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:08,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/quota/83e7aca4dc5ee3cc247bf52ca6add100/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:08,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 2023-07-21 08:15:08,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 83e7aca4dc5ee3cc247bf52ca6add100: 2023-07-21 08:15:08,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689927307206.83e7aca4dc5ee3cc247bf52ca6add100. 
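During close, each region flushes its memstore to a new HFile and writes a final seqid marker under recovered.edits, which is what the DefaultStoreFlusher and HStore "Added ..." lines show for hbase:namespace, hbase:rsgroup, and hbase:quota. The same kind of memstore flush can also be requested explicitly through the client Admin API; the sketch below is illustrative only and is not what the close path calls internally.

// Sketch: explicitly flushing a table's memstores via the Admin API.
// Connection setup is illustrative; the table name is taken from the log.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Force the hbase:quota memstore out to an HFile, like the close-time flush above.
      admin.flush(TableName.valueOf("hbase", "quota"));
    }
  }
}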
2023-07-21 08:15:08,760 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/table/7186e16a696745f09f7f4e897719df5d 2023-07-21 08:15:08,762 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,768 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7186e16a696745f09f7f4e897719df5d 2023-07-21 08:15:08,769 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/info/dac1ebbcfa6940c8a513061d6c50d442 as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/info/dac1ebbcfa6940c8a513061d6c50d442 2023-07-21 08:15:08,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dac1ebbcfa6940c8a513061d6c50d442 2023-07-21 08:15:08,774 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/info/dac1ebbcfa6940c8a513061d6c50d442, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 08:15:08,775 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/rep_barrier/018a12d863ac46a98b75d240f2eed861 as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/rep_barrier/018a12d863ac46a98b75d240f2eed861 2023-07-21 08:15:08,781 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 018a12d863ac46a98b75d240f2eed861 2023-07-21 08:15:08,781 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/rep_barrier/018a12d863ac46a98b75d240f2eed861, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 08:15:08,782 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/.tmp/table/7186e16a696745f09f7f4e897719df5d as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/table/7186e16a696745f09f7f4e897719df5d 2023-07-21 08:15:08,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7186e16a696745f09f7f4e897719df5d 2023-07-21 08:15:08,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/table/7186e16a696745f09f7f4e897719df5d, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 08:15:08,789 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 103ms, sequenceid=31, compaction requested=false 2023-07-21 08:15:08,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 08:15:08,803 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 08:15:08,803 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:08,804 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:08,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:15:08,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:08,882 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,40175,1689927305741; all regions closed. 2023-07-21 08:15:08,882 DEBUG [RS:1;jenkins-hbase5:40175] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 08:15:08,883 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,38067,1689927305576; all regions closed. 2023-07-21 08:15:08,883 DEBUG [RS:0;jenkins-hbase5:38067] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 08:15:08,884 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,45973,1689927305899; all regions closed. 2023-07-21 08:15:08,884 DEBUG [RS:2;jenkins-hbase5:45973] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 08:15:08,889 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/WALs/jenkins-hbase5.apache.org,38067,1689927305576/jenkins-hbase5.apache.org%2C38067%2C1689927305576.1689927306498 not finished, retry = 0 2023-07-21 08:15:08,890 DEBUG [RS:1;jenkins-hbase5:40175] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs 2023-07-21 08:15:08,890 INFO [RS:1;jenkins-hbase5:40175] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C40175%2C1689927305741:(num 1689927306503) 2023-07-21 08:15:08,890 DEBUG [RS:1;jenkins-hbase5:40175] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,890 INFO [RS:1;jenkins-hbase5:40175] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,890 INFO [RS:1;jenkins-hbase5:40175] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:08,891 INFO [RS:1;jenkins-hbase5:40175] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:08,891 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:08,891 INFO [RS:1;jenkins-hbase5:40175] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:08,891 INFO [RS:1;jenkins-hbase5:40175] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:08,891 INFO [RS:1;jenkins-hbase5:40175] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:40175 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:08,895 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,40175,1689927305741 2023-07-21 08:15:08,895 DEBUG 
[Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:08,895 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,40175,1689927305741] 2023-07-21 08:15:08,895 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,40175,1689927305741; numProcessing=1 2023-07-21 08:15:08,899 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,40175,1689927305741 already deleted, retry=false 2023-07-21 08:15:08,899 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,40175,1689927305741 expired; onlineServers=2 2023-07-21 08:15:08,899 DEBUG [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs 2023-07-21 08:15:08,899 INFO [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C45973%2C1689927305899.meta:.meta(num 1689927306698) 2023-07-21 08:15:08,906 DEBUG [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs 2023-07-21 08:15:08,906 INFO [RS:2;jenkins-hbase5:45973] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C45973%2C1689927305899:(num 1689927306498) 2023-07-21 08:15:08,906 DEBUG [RS:2;jenkins-hbase5:45973] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,907 INFO [RS:2;jenkins-hbase5:45973] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,907 INFO [RS:2;jenkins-hbase5:45973] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:08,907 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:08,908 INFO [RS:2;jenkins-hbase5:45973] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:45973 2023-07-21 08:15:08,992 DEBUG [RS:0;jenkins-hbase5:38067] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/oldWALs 2023-07-21 08:15:08,992 INFO [RS:0;jenkins-hbase5:38067] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C38067%2C1689927305576:(num 1689927306498) 2023-07-21 08:15:08,992 DEBUG [RS:0;jenkins-hbase5:38067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:08,992 INFO [RS:0;jenkins-hbase5:38067] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:08,992 INFO [RS:0;jenkins-hbase5:38067] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:08,993 INFO [RS:0;jenkins-hbase5:38067] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:08,993 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 08:15:08,993 INFO [RS:0;jenkins-hbase5:38067] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:08,993 INFO [RS:0;jenkins-hbase5:38067] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:08,994 INFO [RS:0;jenkins-hbase5:38067] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:38067 2023-07-21 08:15:08,997 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:08,997 INFO [RS:1;jenkins-hbase5:40175] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,40175,1689927305741; zookeeper connection closed. 2023-07-21 08:15:08,997 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:40175-0x101f28f12f20002, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:08,998 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@52e4914e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@52e4914e 2023-07-21 08:15:08,999 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:08,999 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,38067,1689927305576 2023-07-21 08:15:08,999 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:08,999 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:08,999 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45973,1689927305899 2023-07-21 08:15:09,000 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,45973,1689927305899] 2023-07-21 08:15:09,000 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,45973,1689927305899; numProcessing=2 2023-07-21 08:15:09,001 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,45973,1689927305899 already deleted, retry=false 2023-07-21 08:15:09,001 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,45973,1689927305899 expired; onlineServers=1 2023-07-21 08:15:09,001 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer 
ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,38067,1689927305576] 2023-07-21 08:15:09,001 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,38067,1689927305576; numProcessing=3 2023-07-21 08:15:09,003 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,38067,1689927305576 already deleted, retry=false 2023-07-21 08:15:09,003 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,38067,1689927305576 expired; onlineServers=0 2023-07-21 08:15:09,003 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,43777,1689927305397' ***** 2023-07-21 08:15:09,003 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 08:15:09,004 DEBUG [M:0;jenkins-hbase5:43777] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62b8de26, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:09,004 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:09,005 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:09,005 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:09,005 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:09,006 INFO [M:0;jenkins-hbase5:43777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5fd2d473{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:15:09,006 INFO [M:0;jenkins-hbase5:43777] server.AbstractConnector(383): Stopped ServerConnector@2e304bdc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:09,006 INFO [M:0;jenkins-hbase5:43777] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:09,006 INFO [M:0;jenkins-hbase5:43777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39bac69f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:09,006 INFO [M:0;jenkins-hbase5:43777] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@26c901e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:09,007 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,43777,1689927305397 2023-07-21 08:15:09,007 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegionServer(1170): 
stopping server jenkins-hbase5.apache.org,43777,1689927305397; all regions closed. 2023-07-21 08:15:09,007 DEBUG [M:0;jenkins-hbase5:43777] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:09,007 INFO [M:0;jenkins-hbase5:43777] master.HMaster(1491): Stopping master jetty server 2023-07-21 08:15:09,008 INFO [M:0;jenkins-hbase5:43777] server.AbstractConnector(383): Stopped ServerConnector@1b684e42{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:09,008 DEBUG [M:0;jenkins-hbase5:43777] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 08:15:09,008 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 08:15:09,008 DEBUG [M:0;jenkins-hbase5:43777] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 08:15:09,008 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927306254] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927306254,5,FailOnTimeoutGroup] 2023-07-21 08:15:09,008 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927306253] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927306253,5,FailOnTimeoutGroup] 2023-07-21 08:15:09,008 INFO [M:0;jenkins-hbase5:43777] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 08:15:09,010 INFO [M:0;jenkins-hbase5:43777] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 08:15:09,010 INFO [M:0;jenkins-hbase5:43777] hbase.ChoreService(369): Chore service for: master/jenkins-hbase5:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:09,010 DEBUG [M:0;jenkins-hbase5:43777] master.HMaster(1512): Stopping service threads 2023-07-21 08:15:09,010 INFO [M:0;jenkins-hbase5:43777] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 08:15:09,010 ERROR [M:0;jenkins-hbase5:43777] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 08:15:09,011 INFO [M:0;jenkins-hbase5:43777] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 08:15:09,011 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 08:15:09,011 DEBUG [M:0;jenkins-hbase5:43777] zookeeper.ZKUtil(398): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 08:15:09,011 WARN [M:0;jenkins-hbase5:43777] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 08:15:09,011 INFO [M:0;jenkins-hbase5:43777] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 08:15:09,012 INFO [M:0;jenkins-hbase5:43777] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 08:15:09,012 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:15:09,012 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:09,012 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:09,012 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:15:09,012 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:09,012 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-21 08:15:09,027 INFO [M:0;jenkins-hbase5:43777] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/51f2e6ba20e2444cb4df062101c86d0f 2023-07-21 08:15:09,032 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/51f2e6ba20e2444cb4df062101c86d0f as hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/51f2e6ba20e2444cb4df062101c86d0f 2023-07-21 08:15:09,037 INFO [M:0;jenkins-hbase5:43777] regionserver.HStore(1080): Added hdfs://localhost:41921/user/jenkins/test-data/bf4086c8-0758-466d-9430-117eb5d81b2c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/51f2e6ba20e2444cb4df062101c86d0f, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 08:15:09,038 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95234, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=194, compaction requested=false 2023-07-21 08:15:09,039 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 08:15:09,040 DEBUG [M:0;jenkins-hbase5:43777] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:09,043 INFO [M:0;jenkins-hbase5:43777] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 08:15:09,043 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:09,044 INFO [M:0;jenkins-hbase5:43777] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:43777 2023-07-21 08:15:09,045 DEBUG [M:0;jenkins-hbase5:43777] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase5.apache.org,43777,1689927305397 already deleted, retry=false 2023-07-21 08:15:09,156 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,156 INFO [M:0;jenkins-hbase5:43777] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,43777,1689927305397; zookeeper connection closed. 2023-07-21 08:15:09,156 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): master:43777-0x101f28f12f20000, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,257 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,257 INFO [RS:0;jenkins-hbase5:38067] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,38067,1689927305576; zookeeper connection closed. 2023-07-21 08:15:09,257 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101f28f12f20001, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,257 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4a8719c5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4a8719c5 2023-07-21 08:15:09,357 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,357 INFO [RS:2;jenkins-hbase5:45973] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,45973,1689927305899; zookeeper connection closed. 
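
The entries above record the orderly teardown of the mini cluster used so far (regions closed, WALs archived to oldWALs, ZooKeeper ephemeral nodes deleted, master and region servers exiting), and the entries below show a fresh mini cluster being brought up for the next test method. A minimal sketch of the test-side calls that typically drive this lifecycle, assuming only the public HBaseTestingUtility and StartMiniClusterOption API shipped with the hbase-server test jars (class and method usage here is illustrative, not taken from the TestRSGroupsAdmin1 source):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors the option string logged in this run:
        // 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);
        try {
          // ... test body runs against the embedded cluster here ...
        } finally {
          // Produces the shutdown sequence seen above: regions close, WALs are
          // archived, ZK ephemeral nodes are removed, then DFS and ZK stop.
          util.shutdownMiniCluster();
        }
      }
    }
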
2023-07-21 08:15:09,357 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): regionserver:45973-0x101f28f12f20003, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:09,357 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@29ea11d4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@29ea11d4 2023-07-21 08:15:09,357 INFO [Listener at localhost/44391] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 08:15:09,358 WARN [Listener at localhost/44391] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:09,362 INFO [Listener at localhost/44391] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:09,468 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 08:15:09,468 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-994054683-172.31.10.131-1689927304298 (Datanode Uuid f83013c2-731e-430d-816f-3dae88c98489) service to localhost/127.0.0.1:41921 2023-07-21 08:15:09,469 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data5/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,469 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data6/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,472 WARN [Listener at localhost/44391] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:09,476 INFO [Listener at localhost/44391] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:09,580 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 08:15:09,580 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-994054683-172.31.10.131-1689927304298 (Datanode Uuid b35d9684-610e-4e23-9a0d-dcaa90ef9ab8) service to localhost/127.0.0.1:41921 2023-07-21 08:15:09,581 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data3/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,581 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data4/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,582 WARN [Listener at localhost/44391] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 08:15:09,594 INFO [Listener at localhost/44391] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:09,698 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 08:15:09,698 WARN [BP-994054683-172.31.10.131-1689927304298 heartbeating to localhost/127.0.0.1:41921] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-994054683-172.31.10.131-1689927304298 (Datanode Uuid da4245b8-b425-4e24-a633-b7a65263d49b) service to localhost/127.0.0.1:41921 2023-07-21 08:15:09,698 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data1/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,699 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/cluster_21653f15-198a-5eb8-1a7e-fefc7deb1383/dfs/data/data2/current/BP-994054683-172.31.10.131-1689927304298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 08:15:09,712 INFO [Listener at localhost/44391] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 08:15:09,828 INFO [Listener at localhost/44391] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 08:15:09,861 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.log.dir so I do NOT create it in target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/78540638-49e7-0860-98eb-f3e6efa3bdfc/hadoop.tmp.dir so I do NOT create it in target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0, deleteOnExit=true 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/test.cache.data in system properties and HBase conf 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir in system properties and HBase conf 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 08:15:09,862 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 08:15:09,863 DEBUG [Listener at localhost/44391] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 08:15:09,863 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/nfs.dump.dir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 08:15:09,864 INFO [Listener at localhost/44391] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 08:15:09,868 WARN [Listener at localhost/44391] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:15:09,868 WARN [Listener at localhost/44391] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:15:09,912 WARN [Listener at localhost/44391] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:09,915 INFO [Listener at localhost/44391] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:09,919 INFO [Listener at localhost/44391] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/Jetty_localhost_36011_hdfs____t2n8on/webapp 2023-07-21 08:15:09,926 DEBUG [Listener at localhost/44391-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101f28f12f2000a, quorum=127.0.0.1:59333, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 08:15:09,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101f28f12f2000a, quorum=127.0.0.1:59333, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 08:15:10,013 INFO [Listener at localhost/44391] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36011 2023-07-21 08:15:10,017 WARN [Listener at localhost/44391] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 08:15:10,017 WARN [Listener at localhost/44391] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 08:15:10,071 WARN [Listener at localhost/43379] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:10,085 WARN [Listener at localhost/43379] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:10,087 WARN [Listener 
at localhost/43379] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:10,089 INFO [Listener at localhost/43379] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:10,094 INFO [Listener at localhost/43379] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/Jetty_localhost_35797_datanode____dunb6i/webapp 2023-07-21 08:15:10,193 INFO [Listener at localhost/43379] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35797 2023-07-21 08:15:10,203 WARN [Listener at localhost/34973] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:10,219 WARN [Listener at localhost/34973] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:10,221 WARN [Listener at localhost/34973] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:10,222 INFO [Listener at localhost/34973] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:10,225 INFO [Listener at localhost/34973] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/Jetty_localhost_45681_datanode____.m96xzf/webapp 2023-07-21 08:15:10,313 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9d6333151caed63d: Processing first storage report for DS-1ab313a6-3496-4477-9be5-4c4e547c782f from datanode 0073a575-61f0-4eae-9026-a6dfe1cb81d3 2023-07-21 08:15:10,313 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9d6333151caed63d: from storage DS-1ab313a6-3496-4477-9be5-4c4e547c782f node DatanodeRegistration(127.0.0.1:41537, datanodeUuid=0073a575-61f0-4eae-9026-a6dfe1cb81d3, infoPort=38993, infoSecurePort=0, ipcPort=34973, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,313 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9d6333151caed63d: Processing first storage report for DS-05d86a7f-1829-41df-9eef-d94c486ee1bb from datanode 0073a575-61f0-4eae-9026-a6dfe1cb81d3 2023-07-21 08:15:10,313 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9d6333151caed63d: from storage DS-05d86a7f-1829-41df-9eef-d94c486ee1bb node DatanodeRegistration(127.0.0.1:41537, datanodeUuid=0073a575-61f0-4eae-9026-a6dfe1cb81d3, infoPort=38993, infoSecurePort=0, ipcPort=34973, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,332 INFO [Listener at localhost/34973] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45681 2023-07-21 08:15:10,341 WARN [Listener at localhost/33019] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-21 08:15:10,370 WARN [Listener at localhost/33019] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 08:15:10,373 WARN [Listener at localhost/33019] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 08:15:10,375 INFO [Listener at localhost/33019] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 08:15:10,379 INFO [Listener at localhost/33019] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/Jetty_localhost_36487_datanode____a04iqf/webapp 2023-07-21 08:15:10,460 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd1e889f7a761b69b: Processing first storage report for DS-c33db604-e339-4e74-b00b-ff5e751d21dc from datanode fa80365e-a45c-4531-9d9b-8c96cdede68d 2023-07-21 08:15:10,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd1e889f7a761b69b: from storage DS-c33db604-e339-4e74-b00b-ff5e751d21dc node DatanodeRegistration(127.0.0.1:39077, datanodeUuid=fa80365e-a45c-4531-9d9b-8c96cdede68d, infoPort=41231, infoSecurePort=0, ipcPort=33019, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,461 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd1e889f7a761b69b: Processing first storage report for DS-f4ecb4f1-ce80-4c0f-a8b2-1087151b1c96 from datanode fa80365e-a45c-4531-9d9b-8c96cdede68d 2023-07-21 08:15:10,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd1e889f7a761b69b: from storage DS-f4ecb4f1-ce80-4c0f-a8b2-1087151b1c96 node DatanodeRegistration(127.0.0.1:39077, datanodeUuid=fa80365e-a45c-4531-9d9b-8c96cdede68d, infoPort=41231, infoSecurePort=0, ipcPort=33019, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,499 INFO [Listener at localhost/33019] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36487 2023-07-21 08:15:10,515 WARN [Listener at localhost/43371] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 08:15:10,616 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x663eb24fe0d4af1e: Processing first storage report for DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8 from datanode 6e1178d7-a133-4550-b089-aaf1db3cb31f 2023-07-21 08:15:10,616 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x663eb24fe0d4af1e: from storage DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8 node DatanodeRegistration(127.0.0.1:33961, datanodeUuid=6e1178d7-a133-4550-b089-aaf1db3cb31f, infoPort=37871, infoSecurePort=0, ipcPort=43371, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,617 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x663eb24fe0d4af1e: Processing first storage 
report for DS-a858c476-0d49-46b7-b4d5-afbd6bf3d0fb from datanode 6e1178d7-a133-4550-b089-aaf1db3cb31f 2023-07-21 08:15:10,617 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x663eb24fe0d4af1e: from storage DS-a858c476-0d49-46b7-b4d5-afbd6bf3d0fb node DatanodeRegistration(127.0.0.1:33961, datanodeUuid=6e1178d7-a133-4550-b089-aaf1db3cb31f, infoPort=37871, infoSecurePort=0, ipcPort=43371, storageInfo=lv=-57;cid=testClusterID;nsid=1190573574;c=1689927309871), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 08:15:10,628 DEBUG [Listener at localhost/43371] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02 2023-07-21 08:15:10,631 INFO [Listener at localhost/43371] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/zookeeper_0, clientPort=59078, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 08:15:10,632 INFO [Listener at localhost/43371] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59078 2023-07-21 08:15:10,632 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,633 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,648 INFO [Listener at localhost/43371] util.FSUtils(471): Created version file at hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 with version=8 2023-07-21 08:15:10,648 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40383/user/jenkins/test-data/be3e7cb2-f9b5-8b7f-7631-825c9c07896b/hbase-staging 2023-07-21 08:15:10,649 DEBUG [Listener at localhost/43371] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 08:15:10,649 DEBUG [Listener at localhost/43371] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 08:15:10,649 DEBUG [Listener at localhost/43371] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 08:15:10,649 DEBUG [Listener at localhost/43371] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
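
The preceding entries show the restarted MiniZooKeeperCluster listening on a fresh client port and the new hbase.rootdir version file being written; the entries that follow start the master and region servers. Once they are up, a test can talk to the embedded cluster through the configuration the utility wired up. A short sketch under the assumption that `util` is the HBaseTestingUtility instance driving this run (the helper method name is invented for illustration):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListTablesSketch {
      // 'util' is assumed to be the HBaseTestingUtility whose startup is logged here.
      static void listTables(HBaseTestingUtility util) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
             Admin admin = conn.getAdmin()) {
          for (TableName name : admin.listTableNames()) {
            System.out.println(name); // user tables created by the test so far
          }
        }
      }
    }
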
2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] client.ConnectionUtils(127): master/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:10,650 INFO [Listener at localhost/43371] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:10,651 INFO [Listener at localhost/43371] ipc.NettyRpcServer(120): Bind to /172.31.10.131:40455 2023-07-21 08:15:10,651 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,652 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,653 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40455 connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:10,660 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:404550x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:10,661 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40455-0x101f28f27810000 connected 2023-07-21 08:15:10,674 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:10,674 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:10,675 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:10,675 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40455 2023-07-21 08:15:10,675 DEBUG [Listener at localhost/43371] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40455 2023-07-21 08:15:10,675 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40455 2023-07-21 08:15:10,676 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40455 2023-07-21 08:15:10,676 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40455 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:10,678 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:10,679 INFO [Listener at localhost/43371] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
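
The ZKUtil lines above show each HBase process registering watches on znodes that may not exist yet (/hbase/master, /hbase/running, /hbase/acl), so the watcher fires as soon as the znode is created later in startup. The same pattern can be reproduced with the plain Apache ZooKeeper client; this is an illustrative sketch against the quorum address printed in the log, not HBase's internal ZKWatcher code:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Connect string matches the quorum logged above; purely illustrative.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:59078", 30_000, event -> { });
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("Event " + event.getType() + " on " + event.getPath());
        // exists() registers a watch even when the znode is absent, which is what
        // the "Set watcher on znode that does not yet exist, /hbase/master" lines
        // refer to: the watch later fires NodeCreated when a master publishes its
        // address under /hbase/master.
        zk.exists("/hbase/master", watcher);
        Thread.sleep(5_000); // wait briefly for events in this sketch
        zk.close();
      }
    }
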
2023-07-21 08:15:10,679 INFO [Listener at localhost/43371] http.HttpServer(1146): Jetty bound to port 39289 2023-07-21 08:15:10,679 INFO [Listener at localhost/43371] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:10,681 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:10,681 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@65704643{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:10,681 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:10,681 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@760f93bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:10,796 INFO [Listener at localhost/43371] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:10,797 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:10,797 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:10,797 INFO [Listener at localhost/43371] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:15:10,798 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:10,799 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c2d406c{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/jetty-0_0_0_0-39289-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4418534768563789206/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:15:10,800 INFO [Listener at localhost/43371] server.AbstractConnector(333): Started ServerConnector@2c973153{HTTP/1.1, (http/1.1)}{0.0.0.0:39289} 2023-07-21 08:15:10,801 INFO [Listener at localhost/43371] server.Server(415): Started @41013ms 2023-07-21 08:15:10,801 INFO [Listener at localhost/43371] master.HMaster(444): hbase.rootdir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5, hbase.cluster.distributed=false 2023-07-21 08:15:10,815 INFO [Listener at localhost/43371] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:10,816 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,816 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,816 
INFO [Listener at localhost/43371] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:10,816 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:10,816 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:10,816 INFO [Listener at localhost/43371] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:10,817 INFO [Listener at localhost/43371] ipc.NettyRpcServer(120): Bind to /172.31.10.131:43707 2023-07-21 08:15:10,817 INFO [Listener at localhost/43371] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:10,818 DEBUG [Listener at localhost/43371] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:10,819 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,820 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:10,821 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43707 connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:10,824 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:437070x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:10,826 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:437070x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:10,826 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:437070x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:10,826 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:437070x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:10,827 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43707-0x101f28f27810001 connected 2023-07-21 08:15:10,828 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43707 2023-07-21 08:15:10,828 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43707 2023-07-21 08:15:10,828 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43707 2023-07-21 08:15:10,828 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, 
port=43707 2023-07-21 08:15:10,829 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43707 2023-07-21 08:15:10,830 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:10,830 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:10,831 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:10,831 INFO [Listener at localhost/43371] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:10,831 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:10,831 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:10,831 INFO [Listener at localhost/43371] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:15:10,832 INFO [Listener at localhost/43371] http.HttpServer(1146): Jetty bound to port 46631 2023-07-21 08:15:10,832 INFO [Listener at localhost/43371] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:10,833 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:10,833 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7dbd2218{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:10,834 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:10,834 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1be9377a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:11,006 INFO [Listener at localhost/43371] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:11,007 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:11,007 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:11,007 INFO [Listener at localhost/43371] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 08:15:11,008 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,009 INFO [Listener at localhost/43371] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1ace3e95{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/jetty-0_0_0_0-46631-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8306433754879365945/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:11,010 INFO [Listener at localhost/43371] server.AbstractConnector(333): Started ServerConnector@20265f8f{HTTP/1.1, (http/1.1)}{0.0.0.0:46631} 2023-07-21 08:15:11,010 INFO [Listener at localhost/43371] server.Server(415): Started @41223ms 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:11,022 INFO [Listener at localhost/43371] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:11,023 INFO [Listener at localhost/43371] ipc.NettyRpcServer(120): Bind to /172.31.10.131:35687 2023-07-21 08:15:11,023 INFO [Listener at localhost/43371] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:11,024 DEBUG [Listener at localhost/43371] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:11,025 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:11,026 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:11,027 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35687 connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:11,031 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:356870x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:11,032 DEBUG [Listener at 
localhost/43371] zookeeper.ZKUtil(164): regionserver:356870x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:11,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35687-0x101f28f27810002 connected 2023-07-21 08:15:11,033 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:11,033 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:11,033 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35687 2023-07-21 08:15:11,034 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35687 2023-07-21 08:15:11,034 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35687 2023-07-21 08:15:11,035 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35687 2023-07-21 08:15:11,036 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35687 2023-07-21 08:15:11,037 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:11,037 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:11,037 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:11,038 INFO [Listener at localhost/43371] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:11,038 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:11,038 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:11,038 INFO [Listener at localhost/43371] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 08:15:11,038 INFO [Listener at localhost/43371] http.HttpServer(1146): Jetty bound to port 36617 2023-07-21 08:15:11,039 INFO [Listener at localhost/43371] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:11,040 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,040 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4cdfbf9d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:11,040 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,040 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2657b892{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:11,153 INFO [Listener at localhost/43371] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:11,154 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:11,154 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:11,154 INFO [Listener at localhost/43371] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:11,155 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,156 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7aeb7de4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/jetty-0_0_0_0-36617-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3134514578106472501/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:11,158 INFO [Listener at localhost/43371] server.AbstractConnector(333): Started ServerConnector@4e2832c6{HTTP/1.1, (http/1.1)}{0.0.0.0:36617} 2023-07-21 08:15:11,158 INFO [Listener at localhost/43371] server.Server(415): Started @41371ms 2023-07-21 08:15:11,170 INFO [Listener at localhost/43371] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:11,170 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,170 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,170 INFO [Listener at localhost/43371] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:11,171 INFO 
[Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:11,171 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:11,171 INFO [Listener at localhost/43371] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:11,172 INFO [Listener at localhost/43371] ipc.NettyRpcServer(120): Bind to /172.31.10.131:45347 2023-07-21 08:15:11,172 INFO [Listener at localhost/43371] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:11,173 DEBUG [Listener at localhost/43371] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:11,174 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:11,174 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:11,175 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45347 connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:11,179 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:453470x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:11,180 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:453470x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:11,180 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45347-0x101f28f27810003 connected 2023-07-21 08:15:11,180 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:11,181 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:11,181 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45347 2023-07-21 08:15:11,182 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45347 2023-07-21 08:15:11,182 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45347 2023-07-21 08:15:11,183 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45347 2023-07-21 08:15:11,183 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=45347 2023-07-21 08:15:11,185 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:11,186 INFO [Listener at localhost/43371] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 08:15:11,187 INFO [Listener at localhost/43371] http.HttpServer(1146): Jetty bound to port 44255 2023-07-21 08:15:11,187 INFO [Listener at localhost/43371] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:11,188 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,188 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@352219ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:11,189 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,189 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c4d16d0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:11,303 INFO [Listener at localhost/43371] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:11,304 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:11,304 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:11,304 INFO [Listener at localhost/43371] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:11,305 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:11,306 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@63be1b37{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/jetty-0_0_0_0-44255-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5393393025700419571/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:11,307 INFO [Listener at localhost/43371] server.AbstractConnector(333): Started ServerConnector@390d766a{HTTP/1.1, (http/1.1)}{0.0.0.0:44255} 2023-07-21 08:15:11,308 INFO [Listener at localhost/43371] server.Server(415): Started @41521ms 2023-07-21 08:15:11,310 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:11,313 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@32c4cc7c{HTTP/1.1, (http/1.1)}{0.0.0.0:33967} 2023-07-21 08:15:11,313 INFO [master/jenkins-hbase5:0:becomeActiveMaster] server.Server(415): Started @41526ms 2023-07-21 08:15:11,313 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,314 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:15:11,315 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,316 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:11,316 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:11,316 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:11,316 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:11,316 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,318 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:15:11,319 INFO [master/jenkins-hbase5:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase5.apache.org,40455,1689927310649 from backup master directory 2023-07-21 08:15:11,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:15:11,321 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,321 WARN [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:11,321 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 08:15:11,321 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,336 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/hbase.id with ID: da7dbb03-33c5-4538-882e-3cf363969d3d 2023-07-21 08:15:11,346 INFO [master/jenkins-hbase5:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:11,348 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,359 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x46b5b40c to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:11,369 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51a8c060, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:11,369 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:11,370 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 08:15:11,370 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:11,371 INFO [master/jenkins-hbase5:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store-tmp 2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:15:11,381 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:11,381 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 08:15:11,381 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:11,382 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/WALs/jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,384 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C40455%2C1689927310649, suffix=, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/WALs/jenkins-hbase5.apache.org,40455,1689927310649, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/oldWALs, maxLogs=10 2023-07-21 08:15:11,401 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:11,401 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:11,401 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:11,403 INFO [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/WALs/jenkins-hbase5.apache.org,40455,1689927310649/jenkins-hbase5.apache.org%2C40455%2C1689927310649.1689927311384 2023-07-21 08:15:11,403 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK], DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK]] 2023-07-21 08:15:11,403 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:11,403 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:11,403 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,403 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,404 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,406 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 08:15:11,406 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 08:15:11,406 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,407 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,407 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,410 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 08:15:11,412 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:11,412 INFO [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9807815840, jitterRate=-0.08657597005367279}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:11,413 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 08:15:11,416 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 08:15:11,417 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 08:15:11,417 INFO 
[master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 08:15:11,417 INFO [master/jenkins-hbase5:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 08:15:11,418 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 08:15:11,418 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 08:15:11,418 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 08:15:11,419 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 08:15:11,420 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 08:15:11,421 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 08:15:11,421 INFO [master/jenkins-hbase5:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 08:15:11,421 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 08:15:11,425 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,426 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 08:15:11,426 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 08:15:11,427 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 08:15:11,433 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:11,433 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:11,433 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 08:15:11,433 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,433 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:11,437 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase5.apache.org,40455,1689927310649, sessionid=0x101f28f27810000, setting cluster-up flag (Was=false) 2023-07-21 08:15:11,439 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,444 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 08:15:11,445 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,448 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,453 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 08:15:11,454 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:11,455 WARN [master/jenkins-hbase5:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.hbase-snapshot/.tmp 2023-07-21 08:15:11,456 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 08:15:11,456 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 08:15:11,459 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 08:15:11,460 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:15:11,460 INFO [master/jenkins-hbase5:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 08:15:11,461 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:11,473 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:15:11,473 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 08:15:11,473 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 08:15:11,473 INFO [master/jenkins-hbase5:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=5, maxPoolSize=5 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase5:0, corePoolSize=10, maxPoolSize=10 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:11,473 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,476 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689927341476 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 08:15:11,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 08:15:11,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,480 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:11,480 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 08:15:11,481 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 08:15:11,481 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 08:15:11,481 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 08:15:11,483 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 08:15:11,483 INFO [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 08:15:11,483 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927311483,5,FailOnTimeoutGroup] 2023-07-21 08:15:11,483 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927311483,5,FailOnTimeoutGroup] 2023-07-21 08:15:11,483 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,484 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-21 08:15:11,484 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,484 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,484 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:11,497 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:11,498 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:11,498 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 2023-07-21 08:15:11,505 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:11,506 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:15:11,507 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/info 2023-07-21 08:15:11,508 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:15:11,508 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,508 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:15:11,510 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(951): ClusterId : da7dbb03-33c5-4538-882e-3cf363969d3d 2023-07-21 08:15:11,510 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(951): ClusterId : da7dbb03-33c5-4538-882e-3cf363969d3d 2023-07-21 08:15:11,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:11,510 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:11,510 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(951): ClusterId : da7dbb03-33c5-4538-882e-3cf363969d3d 2023-07-21 08:15:11,510 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:11,510 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:11,511 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
1588230740 columnFamilyName rep_barrier 2023-07-21 08:15:11,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:15:11,513 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/table 2023-07-21 08:15:11,513 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:15:11,513 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,514 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740 2023-07-21 08:15:11,514 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740 2023-07-21 08:15:11,515 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:11,515 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:11,516 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:11,516 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:11,517 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
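The hbase:meta descriptor that InitMetaProcedure writes above (IS_META, the MultiRowMutationEndpoint coprocessor, and the info/rep_barrier/table families) is built internally by the master, but an equivalent descriptor can be expressed with the public builder API. The sketch below is only illustrative, for a hypothetical user table modelled on those logged settings, not the code path HBase itself uses.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptor {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            // Same endpoint the log shows attached to hbase:meta.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(3)                   // VERSIONS => '3'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .setBloomFilterType(BloomType.NONE)  // BLOOMFILTER => 'NONE'
                .build())
            .build();
      }
    }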
2023-07-21 08:15:11,517 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:11,517 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:11,518 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:11,519 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ReadOnlyZKClient(139): Connect 0x3fba5411 to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:11,520 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:11,520 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:15:11,520 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:11,526 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ReadOnlyZKClient(139): Connect 0x6c970812 to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:11,526 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ReadOnlyZKClient(139): Connect 0x6d351ff9 to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:11,528 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:11,531 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11095516960, jitterRate=0.033350542187690735}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:15:11,531 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:15:11,531 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:15:11,531 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:15:11,531 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:15:11,532 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:15:11,532 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:15:11,533 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:11,533 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:15:11,534 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 08:15:11,534 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 08:15:11,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 08:15:11,535 
INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 08:15:11,536 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 08:15:11,538 DEBUG [RS:0;jenkins-hbase5:43707] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@317dbd96, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:11,538 DEBUG [RS:1;jenkins-hbase5:35687] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30d80503, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:11,538 DEBUG [RS:0;jenkins-hbase5:43707] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35423f61, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:11,538 DEBUG [RS:2;jenkins-hbase5:45347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2077e1ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:11,538 DEBUG [RS:2;jenkins-hbase5:45347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d310481, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:11,538 DEBUG [RS:1;jenkins-hbase5:35687] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c92b0df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:11,547 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase5:43707 2023-07-21 08:15:11,547 INFO [RS:0;jenkins-hbase5:43707] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:11,547 INFO [RS:0;jenkins-hbase5:43707] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:11,547 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 08:15:11,548 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,40455,1689927310649 with isa=jenkins-hbase5.apache.org/172.31.10.131:43707, startcode=1689927310815 2023-07-21 08:15:11,548 DEBUG [RS:0;jenkins-hbase5:43707] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:11,548 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase5:35687 2023-07-21 08:15:11,548 INFO [RS:1;jenkins-hbase5:35687] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:11,549 INFO [RS:1;jenkins-hbase5:35687] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:11,549 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:15:11,549 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,40455,1689927310649 with isa=jenkins-hbase5.apache.org/172.31.10.131:35687, startcode=1689927311021 2023-07-21 08:15:11,549 DEBUG [RS:1;jenkins-hbase5:35687] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:11,549 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:37003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:11,549 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase5:45347 2023-07-21 08:15:11,550 INFO [RS:2;jenkins-hbase5:45347] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:11,550 INFO [RS:2;jenkins-hbase5:45347] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:11,550 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:15:11,551 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40455] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,551 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
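As each region server reports for duty, the RSGroupInfoManagerImpl listener thread above refreshes the membership of the default rsgroup. A minimal client-side sketch of inspecting that group is shown below; it assumes the hbase-rsgroup admin client API available in this branch.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupProbe {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The listener thread logged above adds each registering server to this group.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println("default group servers: " + defaultGroup.getServers());
        }
      }
    }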
2023-07-21 08:15:11,552 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 08:15:11,552 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 2023-07-21 08:15:11,552 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43379 2023-07-21 08:15:11,552 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39289 2023-07-21 08:15:11,552 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,40455,1689927310649 with isa=jenkins-hbase5.apache.org/172.31.10.131:45347, startcode=1689927311169 2023-07-21 08:15:11,552 DEBUG [RS:2;jenkins-hbase5:45347] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:11,552 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:38547, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:11,553 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40455] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,553 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 08:15:11,553 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 08:15:11,553 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 2023-07-21 08:15:11,553 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43379 2023-07-21 08:15:11,553 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39289 2023-07-21 08:15:11,553 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:11,553 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:58945, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:11,554 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40455] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,554 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 08:15:11,554 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 08:15:11,555 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 2023-07-21 08:15:11,555 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43379 2023-07-21 08:15:11,555 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39289 2023-07-21 08:15:11,561 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,561 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,35687,1689927311021] 2023-07-21 08:15:11,561 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,43707,1689927310815] 2023-07-21 08:15:11,561 WARN [RS:0;jenkins-hbase5:43707] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:11,561 INFO [RS:0;jenkins-hbase5:43707] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:11,561 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,562 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:11,562 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,562 WARN [RS:1;jenkins-hbase5:35687] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 08:15:11,562 INFO [RS:1;jenkins-hbase5:35687] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:11,562 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,563 WARN [RS:2;jenkins-hbase5:45347] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
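Each region server above instantiates a WALProvider of type AsyncFSWALProvider; which provider class WALFactory picks is a configuration choice. A minimal sketch, assuming the standard hbase.wal.provider values, follows.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderChoice {
      // Minimal sketch: the provider class logged above is selected by configuration.
      public static Configuration asyncWal() {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" -> AsyncFSWALProvider (as in the log), "filesystem" -> classic FSHLog-based
        // provider, "multiwal" -> several WALs per region server.
        conf.set("hbase.wal.provider", "asyncfs");
        return conf;
      }
    }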
2023-07-21 08:15:11,563 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,563 INFO [RS:2;jenkins-hbase5:45347] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:11,565 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,45347,1689927311169] 2023-07-21 08:15:11,566 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,575 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,576 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,576 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,576 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,576 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,577 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,577 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,577 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,577 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:11,577 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,577 INFO [RS:0;jenkins-hbase5:43707] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:11,578 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:11,578 DEBUG [RS:2;jenkins-hbase5:45347] regionserver.Replication(139): Replication 
stats-in-log period=300 seconds 2023-07-21 08:15:11,579 INFO [RS:2;jenkins-hbase5:45347] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:11,579 INFO [RS:1;jenkins-hbase5:35687] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:11,584 INFO [RS:0;jenkins-hbase5:43707] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:11,591 INFO [RS:1;jenkins-hbase5:35687] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:11,591 INFO [RS:2;jenkins-hbase5:45347] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:11,591 INFO [RS:0;jenkins-hbase5:43707] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:11,591 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,592 INFO [RS:1;jenkins-hbase5:35687] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:11,592 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,594 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:11,594 INFO [RS:2;jenkins-hbase5:45347] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:11,594 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:11,594 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,599 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:11,599 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,599 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,599 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
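The MemStoreFlusher and PressureAwareCompactionThroughputController values above are derived from heap size and configuration: the low-water mark (743.3 M) is roughly 0.95 of the global limit (782.4 M), and the compaction throughput bounds match the 100/50 MB/s defaults. A minimal sketch of the controlling settings follows; the key names are assumptions from memory and worth verifying against the deployed version.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushAndCompactionTuning {
      public static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap usable by all memstores (782.4 M here comes from this fraction).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Low-water mark as a fraction of the global limit (~0.95 -> 743.3 M in the log).
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds used by the pressure-aware controller.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }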
2023-07-21 08:15:11,599 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,599 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,599 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,599 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,600 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,600 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,601 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,601 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,601 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,601 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,601 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,601 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 
08:15:11,602 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:11,602 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:0;jenkins-hbase5:43707] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,602 DEBUG [RS:2;jenkins-hbase5:45347] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,603 DEBUG [RS:1;jenkins-hbase5:35687] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:11,610 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,610 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,610 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,612 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,612 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,612 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
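The executor services started above (RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS and so on) each get their pool size from a per-executor thread-count setting. The sketch below is a rough illustration under that assumption; the exact key names and defaults should be checked in the region server start-up code for this branch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ExecutorPoolTuning {
      public static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys controlling the RS_OPEN_REGION / RS_CLOSE_REGION pool sizes logged above.
        conf.setInt("hbase.regionserver.executor.openregion.threads", 3);
        conf.setInt("hbase.regionserver.executor.closeregion.threads", 3);
        return conf;
      }
    }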
2023-07-21 08:15:11,612 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,613 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,613 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,628 INFO [RS:0;jenkins-hbase5:43707] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:11,628 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,43707,1689927310815-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,632 INFO [RS:2;jenkins-hbase5:45347] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:11,632 INFO [RS:1;jenkins-hbase5:35687] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:11,632 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,35687,1689927311021-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,632 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,45347,1689927311169-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,645 INFO [RS:0;jenkins-hbase5:43707] regionserver.Replication(203): jenkins-hbase5.apache.org,43707,1689927310815 started 2023-07-21 08:15:11,645 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,43707,1689927310815, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:43707, sessionid=0x101f28f27810001 2023-07-21 08:15:11,645 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:11,645 DEBUG [RS:0;jenkins-hbase5:43707] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,645 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,43707,1689927310815' 2023-07-21 08:15:11,645 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,43707,1689927310815' 2023-07-21 08:15:11,646 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 
08:15:11,646 INFO [RS:2;jenkins-hbase5:45347] regionserver.Replication(203): jenkins-hbase5.apache.org,45347,1689927311169 started 2023-07-21 08:15:11,646 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,45347,1689927311169, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:45347, sessionid=0x101f28f27810003 2023-07-21 08:15:11,646 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,647 DEBUG [RS:0;jenkins-hbase5:43707] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,45347,1689927311169' 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:11,647 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:11,648 DEBUG [RS:2;jenkins-hbase5:45347] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:11,648 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,45347,1689927311169' 2023-07-21 08:15:11,648 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:11,648 DEBUG [RS:2;jenkins-hbase5:45347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:11,648 DEBUG [RS:2;jenkins-hbase5:45347] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:11,648 INFO [RS:2;jenkins-hbase5:45347] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:15:11,648 INFO [RS:2;jenkins-hbase5:45347] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 08:15:11,650 DEBUG [RS:0;jenkins-hbase5:43707] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:11,650 INFO [RS:1;jenkins-hbase5:35687] regionserver.Replication(203): jenkins-hbase5.apache.org,35687,1689927311021 started 2023-07-21 08:15:11,650 INFO [RS:0;jenkins-hbase5:43707] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:15:11,650 INFO [RS:0;jenkins-hbase5:43707] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
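The "Quota support disabled" lines above reflect the default: the RPC and space quota managers only start when quota support is switched on. A minimal sketch of enabling it follows.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotas {
      public static Configuration withQuotas() {
        Configuration conf = HBaseConfiguration.create();
        // With this set, RegionServerRpcQuotaManager / RegionServerSpaceQuotaManager start
        // instead of logging "Quota support disabled".
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }
    }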
2023-07-21 08:15:11,650 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,35687,1689927311021, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:35687, sessionid=0x101f28f27810002 2023-07-21 08:15:11,650 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:11,650 DEBUG [RS:1;jenkins-hbase5:35687] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,650 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,35687,1689927311021' 2023-07-21 08:15:11,650 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,35687,1689927311021' 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:11,651 DEBUG [RS:1;jenkins-hbase5:35687] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:11,652 DEBUG [RS:1;jenkins-hbase5:35687] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:11,652 INFO [RS:1;jenkins-hbase5:35687] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:15:11,652 INFO [RS:1;jenkins-hbase5:35687] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
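At this point all three region servers are serving and registered with the master. A test could confirm that from the client side with something like the sketch below, which assumes a running connection to this mini cluster.

    import java.util.EnumSet;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class LiveServerCheck {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          int live = admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS))
              .getLiveServerMetrics().size();
          // Expect 3 for the mini cluster started in this test.
          System.out.println("live region servers: " + live);
        }
      }
    }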
2023-07-21 08:15:11,686 DEBUG [jenkins-hbase5:40455] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 08:15:11,687 DEBUG [jenkins-hbase5:40455] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:11,687 DEBUG [jenkins-hbase5:40455] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:11,687 DEBUG [jenkins-hbase5:40455] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:11,687 DEBUG [jenkins-hbase5:40455] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:11,687 DEBUG [jenkins-hbase5:40455] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:11,688 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,43707,1689927310815, state=OPENING 2023-07-21 08:15:11,689 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 08:15:11,691 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:11,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,43707,1689927310815}] 2023-07-21 08:15:11,692 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:15:11,750 INFO [RS:2;jenkins-hbase5:45347] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C45347%2C1689927311169, suffix=, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,45347,1689927311169, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs, maxLogs=32 2023-07-21 08:15:11,751 INFO [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C43707%2C1689927310815, suffix=, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,43707,1689927310815, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs, maxLogs=32 2023-07-21 08:15:11,753 INFO [RS:1;jenkins-hbase5:35687] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C35687%2C1689927311021, suffix=, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,35687,1689927311021, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs, maxLogs=32 2023-07-21 08:15:11,768 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:11,768 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:11,768 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:11,773 INFO [RS:2;jenkins-hbase5:45347] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,45347,1689927311169/jenkins-hbase5.apache.org%2C45347%2C1689927311169.1689927311750 2023-07-21 08:15:11,774 DEBUG [RS:2;jenkins-hbase5:45347] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK], DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK]] 2023-07-21 08:15:11,774 WARN [ReadOnlyZKClient-127.0.0.1:59078@0x46b5b40c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 08:15:11,775 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:11,782 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:11,782 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47604, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:11,782 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:11,782 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:11,783 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43707] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.10.131:47604 deadline: 1689927371782, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,788 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:11,789 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:11,789 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:11,793 INFO [RS:1;jenkins-hbase5:35687] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,35687,1689927311021/jenkins-hbase5.apache.org%2C35687%2C1689927311021.1689927311754 2023-07-21 08:15:11,794 INFO [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,43707,1689927310815/jenkins-hbase5.apache.org%2C43707%2C1689927310815.1689927311752 2023-07-21 08:15:11,794 DEBUG [RS:1;jenkins-hbase5:35687] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK], DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK]] 2023-07-21 08:15:11,796 DEBUG [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK], DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK]] 2023-07-21 08:15:11,847 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:11,849 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:11,851 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47608, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:11,854 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 08:15:11,854 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:11,856 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C43707%2C1689927310815.meta, suffix=.meta, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,43707,1689927310815, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs, maxLogs=32 2023-07-21 08:15:11,870 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:11,870 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:11,870 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:11,872 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,43707,1689927310815/jenkins-hbase5.apache.org%2C43707%2C1689927310815.meta.1689927311856.meta 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK], DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK]] 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 08:15:11,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 08:15:11,873 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 08:15:11,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 08:15:11,875 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/info 2023-07-21 08:15:11,875 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/info 2023-07-21 08:15:11,876 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 08:15:11,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 08:15:11,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:11,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/rep_barrier 2023-07-21 08:15:11,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 08:15:11,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 08:15:11,879 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/table 2023-07-21 08:15:11,879 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/table 2023-07-21 08:15:11,879 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 08:15:11,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:11,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740 2023-07-21 08:15:11,881 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740 2023-07-21 08:15:11,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 08:15:11,884 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 08:15:11,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10013755520, jitterRate=-0.06739634275436401}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 08:15:11,885 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 08:15:11,888 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689927311847 2023-07-21 08:15:11,893 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 08:15:11,893 INFO [RS_OPEN_META-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 08:15:11,894 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase5.apache.org,43707,1689927310815, state=OPEN 2023-07-21 08:15:11,895 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 08:15:11,895 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 08:15:11,897 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 08:15:11,897 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase5.apache.org,43707,1689927310815 in 203 msec 2023-07-21 08:15:11,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 08:15:11,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 363 msec 2023-07-21 08:15:11,900 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 439 msec 2023-07-21 08:15:11,900 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689927311900, completionTime=-1 2023-07-21 08:15:11,900 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 08:15:11,900 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 08:15:11,904 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 08:15:11,904 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689927371904 2023-07-21 08:15:11,904 INFO [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689927431904 2023-07-21 08:15:11,904 INFO [master/jenkins-hbase5:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-21 08:15:11,912 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40455,1689927310649-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,912 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40455,1689927310649-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,912 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40455,1689927310649-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,912 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase5:40455, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,912 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:11,913 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 08:15:11,913 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:11,914 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 08:15:11,915 DEBUG [master/jenkins-hbase5:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 08:15:11,915 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:11,916 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:11,918 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:11,918 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31 empty. 2023-07-21 08:15:11,919 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:11,919 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 08:15:11,933 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:11,934 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 197ea5a80778b8c2adce4be318829b31, NAME => 'hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp 2023-07-21 08:15:11,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:11,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 197ea5a80778b8c2adce4be318829b31, disabling compactions & flushes 2023-07-21 08:15:11,943 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 
2023-07-21 08:15:11,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:11,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. after waiting 0 ms 2023-07-21 08:15:11,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:11,943 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:11,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 197ea5a80778b8c2adce4be318829b31: 2023-07-21 08:15:11,945 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:11,946 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927311946"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927311946"}]},"ts":"1689927311946"} 2023-07-21 08:15:11,948 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:11,949 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:11,949 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927311949"}]},"ts":"1689927311949"} 2023-07-21 08:15:11,950 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 08:15:11,954 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:11,954 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:11,954 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:11,954 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:11,954 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:11,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=197ea5a80778b8c2adce4be318829b31, ASSIGN}] 2023-07-21 08:15:11,956 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=197ea5a80778b8c2adce4be318829b31, ASSIGN 2023-07-21 08:15:11,957 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=197ea5a80778b8c2adce4be318829b31, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,35687,1689927311021; forceNewPlan=false, retain=false 2023-07-21 08:15:12,086 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:12,088 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 08:15:12,090 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:12,090 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:12,092 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,093 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144 empty. 
2023-07-21 08:15:12,093 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,093 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 08:15:12,105 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:12,107 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d956cbe28a1cb70c40a58098938f8144, NAME => 'hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp 2023-07-21 08:15:12,107 INFO [jenkins-hbase5:40455] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 08:15:12,109 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=197ea5a80778b8c2adce4be318829b31, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,109 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927312109"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927312109"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927312109"}]},"ts":"1689927312109"} 2023-07-21 08:15:12,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 197ea5a80778b8c2adce4be318829b31, server=jenkins-hbase5.apache.org,35687,1689927311021}] 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d956cbe28a1cb70c40a58098938f8144, disabling compactions & flushes 2023-07-21 08:15:12,117 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 
after waiting 0 ms 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,117 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,117 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d956cbe28a1cb70c40a58098938f8144: 2023-07-21 08:15:12,119 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:12,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927312119"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927312119"}]},"ts":"1689927312119"} 2023-07-21 08:15:12,121 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 08:15:12,121 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:12,121 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927312121"}]},"ts":"1689927312121"} 2023-07-21 08:15:12,122 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 08:15:12,125 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:12,125 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:12,125 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:12,125 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:12,125 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:12,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d956cbe28a1cb70c40a58098938f8144, ASSIGN}] 2023-07-21 08:15:12,126 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d956cbe28a1cb70c40a58098938f8144, ASSIGN 2023-07-21 08:15:12,126 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d956cbe28a1cb70c40a58098938f8144, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,43707,1689927310815; forceNewPlan=false, retain=false 2023-07-21 08:15:12,230 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 08:15:12,261 DEBUG 
[RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,262 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:12,265 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54448, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:12,269 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:12,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 197ea5a80778b8c2adce4be318829b31, NAME => 'hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:12,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:12,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,272 INFO [StoreOpener-197ea5a80778b8c2adce4be318829b31-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,275 DEBUG [StoreOpener-197ea5a80778b8c2adce4be318829b31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/info 2023-07-21 08:15:12,275 DEBUG [StoreOpener-197ea5a80778b8c2adce4be318829b31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/info 2023-07-21 08:15:12,276 INFO [StoreOpener-197ea5a80778b8c2adce4be318829b31-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 197ea5a80778b8c2adce4be318829b31 columnFamilyName 
info 2023-07-21 08:15:12,276 INFO [StoreOpener-197ea5a80778b8c2adce4be318829b31-1] regionserver.HStore(310): Store=197ea5a80778b8c2adce4be318829b31/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:12,277 INFO [jenkins-hbase5:40455] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 08:15:12,277 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=d956cbe28a1cb70c40a58098938f8144, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,278 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927312277"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927312277"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927312277"}]},"ts":"1689927312277"} 2023-07-21 08:15:12,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure d956cbe28a1cb70c40a58098938f8144, server=jenkins-hbase5.apache.org,43707,1689927310815}] 2023-07-21 08:15:12,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:12,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:12,293 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened 197ea5a80778b8c2adce4be318829b31; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11391480960, jitterRate=0.06091433763504028}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:12,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for 197ea5a80778b8c2adce4be318829b31: 2023-07-21 08:15:12,295 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31., pid=7, masterSystemTime=1689927312261 2023-07-21 08:15:12,299 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=197ea5a80778b8c2adce4be318829b31, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:12,300 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:12,300 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689927312299"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927312299"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927312299"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927312299"}]},"ts":"1689927312299"} 2023-07-21 08:15:12,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 08:15:12,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 197ea5a80778b8c2adce4be318829b31, server=jenkins-hbase5.apache.org,35687,1689927311021 in 191 msec 2023-07-21 08:15:12,305 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 08:15:12,305 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=197ea5a80778b8c2adce4be318829b31, ASSIGN in 349 msec 2023-07-21 08:15:12,305 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:12,305 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927312305"}]},"ts":"1689927312305"} 2023-07-21 08:15:12,306 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 08:15:12,310 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:12,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 397 msec 2023-07-21 08:15:12,315 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 08:15:12,317 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:12,317 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:12,330 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:12,332 INFO [RS-EventLoopGroup-14-3] 
ipc.ServerRpcConnection(540): Connection from 172.31.10.131:54452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:12,334 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 08:15:12,342 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:12,344 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-21 08:15:12,355 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 08:15:12,360 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 08:15:12,360 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 08:15:12,436 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d956cbe28a1cb70c40a58098938f8144, NAME => 'hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:12,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 08:15:12,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. service=MultiRowMutationService 2023-07-21 08:15:12,437 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 08:15:12,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:12,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,437 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,438 INFO [StoreOpener-d956cbe28a1cb70c40a58098938f8144-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,439 DEBUG [StoreOpener-d956cbe28a1cb70c40a58098938f8144-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/m 2023-07-21 08:15:12,439 DEBUG [StoreOpener-d956cbe28a1cb70c40a58098938f8144-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/m 2023-07-21 08:15:12,440 INFO [StoreOpener-d956cbe28a1cb70c40a58098938f8144-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d956cbe28a1cb70c40a58098938f8144 columnFamilyName m 2023-07-21 08:15:12,440 INFO [StoreOpener-d956cbe28a1cb70c40a58098938f8144-1] regionserver.HStore(310): Store=d956cbe28a1cb70c40a58098938f8144/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:12,441 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,441 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(1055): writing seq id for d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:12,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:12,446 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened d956cbe28a1cb70c40a58098938f8144; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@36f8e5, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:12,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for d956cbe28a1cb70c40a58098938f8144: 2023-07-21 08:15:12,447 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144., pid=9, masterSystemTime=1689927312432 2023-07-21 08:15:12,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,448 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:12,449 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=d956cbe28a1cb70c40a58098938f8144, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,449 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689927312449"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927312449"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927312449"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927312449"}]},"ts":"1689927312449"} 2023-07-21 08:15:12,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-21 08:15:12,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure d956cbe28a1cb70c40a58098938f8144, server=jenkins-hbase5.apache.org,43707,1689927310815 in 170 msec 2023-07-21 08:15:12,454 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-21 08:15:12,454 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d956cbe28a1cb70c40a58098938f8144, ASSIGN in 328 msec 2023-07-21 08:15:12,464 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:12,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 111 msec 2023-07-21 08:15:12,467 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:12,467 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927312467"}]},"ts":"1689927312467"} 2023-07-21 08:15:12,469 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 08:15:12,473 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 08:15:12,476 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:12,477 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 08:15:12,477 INFO [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.155sec 2023-07-21 08:15:12,478 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 390 msec 2023-07-21 08:15:12,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 08:15:12,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 08:15:12,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 08:15:12,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40455,1689927310649-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 08:15:12,480 INFO [master/jenkins-hbase5:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,40455,1689927310649-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 08:15:12,480 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 08:15:12,491 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 08:15:12,491 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 08:15:12,495 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:12,495 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,499 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:15:12,501 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 08:15:12,511 DEBUG [Listener at localhost/43371] zookeeper.ReadOnlyZKClient(139): Connect 0x22cc8e8f to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:12,516 DEBUG [Listener at localhost/43371] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4981e3b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:12,519 DEBUG [hconnection-0x5af53e15-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:12,521 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:12,522 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:12,522 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:12,524 DEBUG [Listener at localhost/43371] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 08:15:12,526 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:51500, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 08:15:12,529 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 08:15:12,529 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:12,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(492): Client=jenkins//172.31.10.131 set balanceSwitch=false 2023-07-21 08:15:12,530 DEBUG [Listener at localhost/43371] zookeeper.ReadOnlyZKClient(139): Connect 0x72c3dca2 to 127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:12,536 DEBUG [Listener at localhost/43371] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b54179, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:12,536 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:12,543 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:12,543 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101f28f2781000a connected 2023-07-21 08:15:12,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,549 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 08:15:12,560 INFO [Listener at localhost/43371] client.ConnectionUtils(127): regionserver/jenkins-hbase5:0 server-side Connection retries=45 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 08:15:12,561 INFO [Listener at localhost/43371] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 08:15:12,562 INFO [Listener at localhost/43371] ipc.NettyRpcServer(120): Bind to /172.31.10.131:42375 2023-07-21 08:15:12,562 INFO [Listener at localhost/43371] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 08:15:12,563 DEBUG [Listener at localhost/43371] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 08:15:12,564 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:12,564 INFO [Listener at localhost/43371] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 08:15:12,565 INFO [Listener at localhost/43371] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42375 connecting to ZooKeeper ensemble=127.0.0.1:59078 2023-07-21 08:15:12,570 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:423750x0, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 08:15:12,571 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(162): regionserver:423750x0, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 08:15:12,572 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42375-0x101f28f2781000b connected 2023-07-21 08:15:12,572 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 08:15:12,573 DEBUG [Listener at localhost/43371] zookeeper.ZKUtil(164): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 08:15:12,573 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-21 08:15:12,574 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42375 2023-07-21 08:15:12,574 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42375 2023-07-21 08:15:12,578 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-21 08:15:12,579 DEBUG [Listener at localhost/43371] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-21 08:15:12,581 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 08:15:12,581 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 08:15:12,581 INFO [Listener at localhost/43371] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 08:15:12,582 INFO [Listener at localhost/43371] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 08:15:12,582 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 08:15:12,582 INFO [Listener at localhost/43371] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 08:15:12,582 INFO [Listener at localhost/43371] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 08:15:12,583 INFO [Listener at localhost/43371] http.HttpServer(1146): Jetty bound to port 39735 2023-07-21 08:15:12,583 INFO [Listener at localhost/43371] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 08:15:12,586 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:12,587 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2014f237{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,AVAILABLE} 2023-07-21 08:15:12,587 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:12,587 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41a216b5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 08:15:12,705 INFO [Listener at localhost/43371] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 08:15:12,705 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 08:15:12,705 INFO [Listener at localhost/43371] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 08:15:12,706 INFO [Listener at localhost/43371] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 08:15:12,706 INFO [Listener at localhost/43371] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 08:15:12,707 INFO [Listener at localhost/43371] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7db6a9e1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/java.io.tmpdir/jetty-0_0_0_0-39735-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4416232502599157112/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:12,709 INFO [Listener at localhost/43371] server.AbstractConnector(333): Started ServerConnector@5cae68bf{HTTP/1.1, (http/1.1)}{0.0.0.0:39735} 2023-07-21 08:15:12,709 INFO [Listener at localhost/43371] server.Server(415): Started @42921ms 2023-07-21 08:15:12,711 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(951): ClusterId : da7dbb03-33c5-4538-882e-3cf363969d3d 2023-07-21 08:15:12,711 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 08:15:12,713 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 08:15:12,713 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 08:15:12,715 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 08:15:12,718 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ReadOnlyZKClient(139): Connect 0x3d513fa7 to 
127.0.0.1:59078 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 08:15:12,722 DEBUG [RS:3;jenkins-hbase5:42375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3396c67d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 08:15:12,722 DEBUG [RS:3;jenkins-hbase5:42375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37a954d7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:12,732 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase5:42375 2023-07-21 08:15:12,732 INFO [RS:3;jenkins-hbase5:42375] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 08:15:12,732 INFO [RS:3;jenkins-hbase5:42375] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 08:15:12,732 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 08:15:12,732 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase5.apache.org,40455,1689927310649 with isa=jenkins-hbase5.apache.org/172.31.10.131:42375, startcode=1689927312560 2023-07-21 08:15:12,732 DEBUG [RS:3;jenkins-hbase5:42375] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 08:15:12,735 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:38441, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 08:15:12,735 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40455] master.ServerManager(394): Registering regionserver=jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,735 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 08:15:12,735 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5 2023-07-21 08:15:12,735 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43379 2023-07-21 08:15:12,735 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39289 2023-07-21 08:15:12,742 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:12,742 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:12,742 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,742 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:12,742 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:12,742 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,742 WARN [RS:3;jenkins-hbase5:42375] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 08:15:12,742 INFO [RS:3;jenkins-hbase5:42375] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 08:15:12,742 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 08:15:12,742 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,742 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase5.apache.org,42375,1689927312560] 2023-07-21 08:15:12,742 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,743 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,743 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,744 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 08:15:12,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,746 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:12,746 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:12,746 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:12,747 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:12,747 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:12,747 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,747 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ZKUtil(162): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:12,748 DEBUG [RS:3;jenkins-hbase5:42375] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 08:15:12,748 INFO [RS:3;jenkins-hbase5:42375] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 08:15:12,749 INFO [RS:3;jenkins-hbase5:42375] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 08:15:12,750 INFO [RS:3;jenkins-hbase5:42375] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 08:15:12,750 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:12,750 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 08:15:12,751 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase5:0, corePoolSize=2, maxPoolSize=2 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,752 DEBUG [RS:3;jenkins-hbase5:42375] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase5:0, corePoolSize=1, maxPoolSize=1 2023-07-21 08:15:12,756 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:12,756 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:12,756 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 08:15:12,768 INFO [RS:3;jenkins-hbase5:42375] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 08:15:12,768 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase5.apache.org,42375,1689927312560-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 08:15:12,778 INFO [RS:3;jenkins-hbase5:42375] regionserver.Replication(203): jenkins-hbase5.apache.org,42375,1689927312560 started 2023-07-21 08:15:12,778 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1637): Serving as jenkins-hbase5.apache.org,42375,1689927312560, RpcServer on jenkins-hbase5.apache.org/172.31.10.131:42375, sessionid=0x101f28f2781000b 2023-07-21 08:15:12,778 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 08:15:12,778 DEBUG [RS:3;jenkins-hbase5:42375] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,778 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,42375,1689927312560' 2023-07-21 08:15:12,778 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 08:15:12,778 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase5.apache.org,42375,1689927312560' 2023-07-21 08:15:12,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 08:15:12,779 DEBUG [RS:3;jenkins-hbase5:42375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 08:15:12,780 DEBUG [RS:3;jenkins-hbase5:42375] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 08:15:12,780 INFO [RS:3;jenkins-hbase5:42375] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 08:15:12,780 INFO [RS:3;jenkins-hbase5:42375] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 08:15:12,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:12,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:12,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:12,786 DEBUG [hconnection-0x4875d50c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 08:15:12,789 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:47624, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 08:15:12,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:12,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:12,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.10.131:51500 deadline: 1689928512797, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
2023-07-21 08:15:12,798 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:15:12,799 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:12,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,800 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:12,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:12,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:12,852 INFO [Listener at localhost/43371] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 516) Potentially hanging thread: IPC Server handler 1 on default port 43379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 33019 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:41921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5-prefix:jenkins-hbase5.apache.org,45347,1689927311169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x22cc8e8f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp729762210-2223-acceptor-0@3198408c-ServerConnector@2c973153{HTTP/1.1, (http/1.1)}{0.0.0.0:39289} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1346711658@qtp-2007715833-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp729762210-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1860723130-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase5:42375 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp301661648-2259 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x46b5b40c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5b2bdf38-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase5:43707-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x46b5b40c-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp301661648-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1401426245-2599 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x22cc8e8f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp23771877-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@22bbfe23 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase5:35687-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927311483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 177493747@qtp-117544939-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45681 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59333@0x277c98ff-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34973 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x4875d50c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7e8e97c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,43777,1689927305397 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6d351ff9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3dd69785 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:41668 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1366183372@qtp-1990421390-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59333@0x277c98ff sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44391-SendThread(127.0.0.1:59333) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:57894 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34973 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 43379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp729762210-2222 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp301661648-2253 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4875d50c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase5:35687Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6c970812-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 43379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5-prefix:jenkins-hbase5.apache.org,43707,1689927310815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7d2c2bf4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@59dc3253[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33019 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34973 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data6/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1993089592@qtp-556789347-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35797 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Session-HouseKeeper-13d4b9e9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1860723130-2320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43371.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x102d08ba-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase5:43707Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@23df4086 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6591b10a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase5:42375-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@6f41ab3a java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp729762210-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927311483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS:2;jenkins-hbase5:45347-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:41921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2122466805_17 at /127.0.0.1:57884 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:41921 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3fba5411 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1006839180-2327 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43379 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3fba5411-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1006839180-2331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41921 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp729762210-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase5:43707 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data3/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:57918 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp23771877-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x102d08ba-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3fba5411-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:57846 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6c970812 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x22cc8e8f-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1401426245-2592 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-2146466660_17 at /127.0.0.1:41658 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp301661648-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase5:45347 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@121114dd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x46b5b40c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1006839180-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) 
Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2122466805_17 at /127.0.0.1:43542 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1887831842_17 at /127.0.0.1:57864 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/43371 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) 
org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data4/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp729762210-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401426245-2596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1401426245-2594 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp301661648-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44391-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1006839180-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1006839180-2328-acceptor-0@44744a0e-ServerConnector@32c4cc7c{HTTP/1.1, (http/1.1)}{0.0.0.0:33967} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33019 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:41921 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 632855631@qtp-117544939-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: jenkins-hbase5:40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: M:0;jenkins-hbase5:40455 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@7d808172 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1860723130-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:43379 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1006839180-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6d351ff9-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43371.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:41921 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1860723130-2314-acceptor-0@658e79b3-ServerConnector@390d766a{HTTP/1.1, (http/1.1)}{0.0.0.0:44255} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59333@0x277c98ff-SendThread(127.0.0.1:59333) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: IPC Server handler 3 on default port 34973 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data5/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native 
Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401426245-2598 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(93576150) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2146466660_17 at /127.0.0.1:43572 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 33019 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 33019 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 43371 
java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:41921 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data1/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 43371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp23771877-2284-acceptor-0@2bba14eb-ServerConnector@4e2832c6{HTTP/1.1, (http/1.1)}{0.0.0.0:36617} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1887831842_17 at /127.0.0.1:43522 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x72c3dca2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3de5d745-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3b12fadf java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1860723130-2313 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
jenkins-hbase5:45347Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp23771877-2283 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1401426245-2595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1401426245-2593-acceptor-0@7a264041-ServerConnector@5cae68bf{HTTP/1.1, (http/1.1)}{0.0.0.0:39735} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:41662 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3d513fa7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase5:42375Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1006839180-2326 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:43379 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1398490559@qtp-1990421390-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36011 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:43379 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp23771877-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:59078 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data2/current/BP-1446181821-172.31.10.131-1689927309871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3d513fa7-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3243ee22 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33019 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@22267c3c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5af53e15-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1860723130-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp23771877-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1887831842_17 at /127.0.0.1:41626 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6c970812-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp301661648-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@53fe02dd java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase5:35687 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:41921 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x3d513fa7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1058617424@qtp-556789347-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1401426245-2597 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2122466805_17 at /127.0.0.1:41648 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp729762210-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Session-HouseKeeper-39046654-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2146466660_17 at /127.0.0.1:57910 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase5.apache.org,40455,1689927310649 java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1860723130-2319 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData-prefix:jenkins-hbase5.apache.org,40455,1689927310649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x72c3dca2-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:43578 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1860723130-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x72c3dca2-SendThread(127.0.0.1:59078) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:43556 [Receiving block BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34973 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@26f18ca9 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5-prefix:jenkins-hbase5.apache.org,43707,1689927310815.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp729762210-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5-prefix:jenkins-hbase5.apache.org,35687,1689927311021 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:43379 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp301661648-2254-acceptor-0@1b053f2b-ServerConnector@20265f8f{HTTP/1.1, (http/1.1)}{0.0.0.0:46631} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43379 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@c52accd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1446181821-172.31.10.131-1689927309871:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1006839180-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 13632279@qtp-2007715833-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36487 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data5) java.lang.Object.wait(Native 
Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp301661648-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 43379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:59078): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 2 on default port 34973 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59078@0x6d351ff9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/679935031.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6bb60c37[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1933953044_17 at /127.0.0.1:43484 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43371.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp23771877-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x102d08ba-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (732687102) connection to localhost/127.0.0.1:41921 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp23771877-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=843 (was 807) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 517), ProcessCount=166 (was 166), AvailableMemoryMB=2422 (was 2667) 2023-07-21 08:15:12,854 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-21 08:15:12,872 INFO [Listener at localhost/43371] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=564, OpenFileDescriptor=843, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=166, AvailableMemoryMB=2421 2023-07-21 08:15:12,872 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-21 08:15:12,873 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-21 08:15:12,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:12,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:12,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:12,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:12,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:12,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:12,882 INFO [RS:3;jenkins-hbase5:42375] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase5.apache.org%2C42375%2C1689927312560, suffix=, logDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,42375,1689927312560, archiveDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs, maxLogs=32 2023-07-21 08:15:12,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:12,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:12,887 
INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:12,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:12,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:12,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:12,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:12,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,912 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK] 2023-07-21 08:15:12,912 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK] 2023-07-21 08:15:12,912 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK] 2023-07-21 08:15:12,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:12,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:12,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.10.131:51500 deadline: 1689928512913, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:12,914 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:12,923 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:12,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:12,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:12,924 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:12,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:12,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:12,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:12,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 08:15:12,930 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:12,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(700): Client=jenkins//172.31.10.131 procedure request for creating table: 
namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 08:15:12,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 08:15:12,932 INFO [RS:3;jenkins-hbase5:42375] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/WALs/jenkins-hbase5.apache.org,42375,1689927312560/jenkins-hbase5.apache.org%2C42375%2C1689927312560.1689927312882 2023-07-21 08:15:12,932 DEBUG [RS:3;jenkins-hbase5:42375] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39077,DS-c33db604-e339-4e74-b00b-ff5e751d21dc,DISK], DatanodeInfoWithStorage[127.0.0.1:41537,DS-1ab313a6-3496-4477-9be5-4c4e547c782f,DISK], DatanodeInfoWithStorage[127.0.0.1:33961,DS-b3c98c75-9de9-42fd-b00f-da9ba3f219d8,DISK]] 2023-07-21 08:15:12,932 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:12,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:12,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:12,937 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 08:15:12,938 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:12,939 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 empty. 
2023-07-21 08:15:12,940 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:12,940 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 08:15:12,955 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 08:15:12,957 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => ef0bdb2fc1a95ba49cf11fafa25e6659, NAME => 't1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing ef0bdb2fc1a95ba49cf11fafa25e6659, disabling compactions & flushes 2023-07-21 08:15:12,974 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. after waiting 0 ms 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:12,974 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:12,974 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for ef0bdb2fc1a95ba49cf11fafa25e6659: 2023-07-21 08:15:12,977 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 08:15:12,978 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927312978"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927312978"}]},"ts":"1689927312978"} 2023-07-21 08:15:12,980 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 08:15:12,980 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 08:15:12,980 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927312980"}]},"ts":"1689927312980"} 2023-07-21 08:15:12,982 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase5.apache.org=0} racks are {/default-rack=0} 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 08:15:12,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 08:15:12,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, ASSIGN}] 2023-07-21 08:15:12,986 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, ASSIGN 2023-07-21 08:15:12,987 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, ASSIGN; state=OFFLINE, location=jenkins-hbase5.apache.org,42375,1689927312560; forceNewPlan=false, retain=false 2023-07-21 08:15:13,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 08:15:13,138 INFO [jenkins-hbase5:40455] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
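The repeated MasterRpcServices(1230) "Checking to see if procedure is done pid=12" entries around this point are the client polling the master for procedure completion (the HBaseAdmin$TableFuture line further down reports the same procId as completed). A hedged sketch of the async variant that produces this polling pattern, reusing the hypothetical 'admin' and 'desc' from the previous sketch:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class AsyncCreateSketch {
      // Sketch only: 'admin' and 'desc' are the handle and descriptor from the previous sketch.
      static void createAndWait(Admin admin, TableDescriptor desc) throws Exception {
        Future<Void> f = admin.createTableAsync(desc, null);  // null = no pre-split keys
        f.get(60, TimeUnit.SECONDS);  // client-side wait behind the repeated
                                      // "Checking to see if procedure is done pid=..." polls
      }
    }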
2023-07-21 08:15:13,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ef0bdb2fc1a95ba49cf11fafa25e6659, regionState=OPENING, regionLocation=jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:13,139 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927313139"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927313139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927313139"}]},"ts":"1689927313139"} 2023-07-21 08:15:13,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure ef0bdb2fc1a95ba49cf11fafa25e6659, server=jenkins-hbase5.apache.org,42375,1689927312560}] 2023-07-21 08:15:13,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 08:15:13,293 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:13,293 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 08:15:13,294 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.10.131:38830, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 08:15:13,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(130): Open t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:13,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ef0bdb2fc1a95ba49cf11fafa25e6659, NAME => 't1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.', STARTKEY => '', ENDKEY => ''} 2023-07-21 08:15:13,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(866): Instantiated t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 08:15:13,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7894): checking encryption for ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(7897): checking classloading for ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,300 INFO [StoreOpener-ef0bdb2fc1a95ba49cf11fafa25e6659-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,301 DEBUG [StoreOpener-ef0bdb2fc1a95ba49cf11fafa25e6659-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/cf1 2023-07-21 08:15:13,301 DEBUG [StoreOpener-ef0bdb2fc1a95ba49cf11fafa25e6659-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/cf1 2023-07-21 08:15:13,301 INFO [StoreOpener-ef0bdb2fc1a95ba49cf11fafa25e6659-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ef0bdb2fc1a95ba49cf11fafa25e6659 columnFamilyName cf1 2023-07-21 08:15:13,302 INFO [StoreOpener-ef0bdb2fc1a95ba49cf11fafa25e6659-1] regionserver.HStore(310): Store=ef0bdb2fc1a95ba49cf11fafa25e6659/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 08:15:13,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1055): writing seq id for ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 08:15:13,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1072): Opened ef0bdb2fc1a95ba49cf11fafa25e6659; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10402293440, jitterRate=-0.03121092915534973}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 08:15:13,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(965): Region open journal for ef0bdb2fc1a95ba49cf11fafa25e6659: 2023-07-21 08:15:13,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659., pid=14, masterSystemTime=1689927313293 2023-07-21 08:15:13,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:13,315 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase5:0-0] handler.AssignRegionHandler(158): Opened t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 
2023-07-21 08:15:13,316 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ef0bdb2fc1a95ba49cf11fafa25e6659, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:13,316 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927313316"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689927313316"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689927313316"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689927313316"}]},"ts":"1689927313316"} 2023-07-21 08:15:13,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 08:15:13,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure ef0bdb2fc1a95ba49cf11fafa25e6659, server=jenkins-hbase5.apache.org,42375,1689927312560 in 177 msec 2023-07-21 08:15:13,320 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 08:15:13,320 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, ASSIGN in 332 msec 2023-07-21 08:15:13,320 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 08:15:13,320 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927313320"}]},"ts":"1689927313320"} 2023-07-21 08:15:13,321 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 08:15:13,323 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 08:15:13,324 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 396 msec 2023-07-21 08:15:13,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 08:15:13,534 INFO [Listener at localhost/43371] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 08:15:13,535 DEBUG [Listener at localhost/43371] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 08:15:13,535 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:13,537 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 08:15:13,537 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:13,537 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
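The entries that follow show the test re-issuing the same create for 't1' and the master rolling it back with TableExistsException, which is the behavior under test (testNotMoveTableToNullRSGroupWhenCreatingExistingTable). A client that does not want that rollback path could guard the call; this is a hedged sketch under the same assumptions as above, not something the test itself does:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class GuardedCreateSketch {
      // Sketch only: avoid the TableExistsException seen below when 't1' is created twice.
      static void createIfAbsent(Admin admin, TableDescriptor desc) throws IOException {
        if (admin.tableExists(desc.getTableName())) {
          return;  // nothing to do; a second createTable would be rolled back by the master
        }
        try {
          admin.createTable(desc);
        } catch (TableExistsException e) {
          // Lost a race with a concurrent creator; treat as already created.
        }
      }
    }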
2023-07-21 08:15:13,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$4(2112): Client=jenkins//172.31.10.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 08:15:13,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 08:15:13,542 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 08:15:13,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 08:15:13,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.10.131:51500 deadline: 1689927373539, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 08:15:13,544 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:13,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-21 08:15:13,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:13,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:13,646 INFO [Listener at localhost/43371] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 08:15:13,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$11(2418): Client=jenkins//172.31.10.131 disable t1 2023-07-21 08:15:13,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 08:15:13,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 08:15:13,651 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927313650"}]},"ts":"1689927313650"} 2023-07-21 08:15:13,652 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 08:15:13,656 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 08:15:13,657 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, UNASSIGN}] 2023-07-21 08:15:13,657 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, UNASSIGN 2023-07-21 08:15:13,658 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ef0bdb2fc1a95ba49cf11fafa25e6659, regionState=CLOSING, regionLocation=jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:13,658 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927313658"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689927313658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689927313658"}]},"ts":"1689927313658"} 2023-07-21 08:15:13,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure ef0bdb2fc1a95ba49cf11fafa25e6659, server=jenkins-hbase5.apache.org,42375,1689927312560}] 2023-07-21 08:15:13,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 08:15:13,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(111): Close ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing ef0bdb2fc1a95ba49cf11fafa25e6659, disabling compactions & flushes 2023-07-21 08:15:13,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:13,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:13,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. after waiting 0 ms 2023-07-21 08:15:13,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 
2023-07-21 08:15:13,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 08:15:13,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659. 2023-07-21 08:15:13,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for ef0bdb2fc1a95ba49cf11fafa25e6659: 2023-07-21 08:15:13,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.UnassignRegionHandler(149): Closed ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,820 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ef0bdb2fc1a95ba49cf11fafa25e6659, regionState=CLOSED 2023-07-21 08:15:13,821 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689927313820"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689927313820"}]},"ts":"1689927313820"} 2023-07-21 08:15:13,824 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 08:15:13,824 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure ef0bdb2fc1a95ba49cf11fafa25e6659, server=jenkins-hbase5.apache.org,42375,1689927312560 in 163 msec 2023-07-21 08:15:13,826 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 08:15:13,826 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=ef0bdb2fc1a95ba49cf11fafa25e6659, UNASSIGN in 167 msec 2023-07-21 08:15:13,826 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689927313826"}]},"ts":"1689927313826"} 2023-07-21 08:15:13,828 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 08:15:13,829 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 08:15:13,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 184 msec 2023-07-21 08:15:13,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 08:15:13,952 INFO [Listener at localhost/43371] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 08:15:13,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$5(2228): Client=jenkins//172.31.10.131 delete t1 2023-07-21 08:15:13,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-21 08:15:13,955 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 08:15:13,955 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 08:15:13,956 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 08:15:13,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:13,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:13,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:13,960 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 08:15:13,963 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/cf1, FileablePath, hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/recovered.edits] 2023-07-21 08:15:13,970 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/recovered.edits/4.seqid to hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/archive/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659/recovered.edits/4.seqid 2023-07-21 08:15:13,970 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/.tmp/data/default/t1/ef0bdb2fc1a95ba49cf11fafa25e6659 2023-07-21 08:15:13,970 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 08:15:13,973 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 08:15:13,974 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 08:15:13,976 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 08:15:13,977 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 08:15:13,977 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
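The DisableTableProcedure (pid=16) above and the DeleteTableProcedure (pid=19) running here are the server-side halves of the usual disable-then-delete pair issued from the client. A hedged sketch of that pair, assuming an Admin handle as in the earlier sketches:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTableSketch {
      // Sketch only: the disable + delete pair that drives the DisableTableProcedure
      // and DeleteTableProcedure seen in this log.
      static void dropTable(Admin admin, TableName table) throws IOException {
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table);   // unassigns the region(s), table state -> DISABLED
        }
        admin.deleteTable(table);      // archives the region dirs and removes the META rows
      }
    }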
2023-07-21 08:15:13,977 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689927313977"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:13,979 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 08:15:13,979 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ef0bdb2fc1a95ba49cf11fafa25e6659, NAME => 't1,,1689927312927.ef0bdb2fc1a95ba49cf11fafa25e6659.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 08:15:13,979 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 08:15:13,979 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689927313979"}]},"ts":"9223372036854775807"} 2023-07-21 08:15:13,980 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 08:15:13,982 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 08:15:13,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 29 msec 2023-07-21 08:15:14,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 08:15:14,062 INFO [Listener at localhost/43371] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 08:15:14,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
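The teardown sequence that follows (remove rsgroup 'master', re-add it, then move the master's address into it) fails each time with the ConstraintException "Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist", because only live region servers can be moved between groups and 40455 is the master's port. A hedged sketch of the equivalent client calls, using the RSGroupAdminClient class that the stack traces above already reference; the constructor usage and the open Connection 'conn' are assumptions:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RSGroupCleanupSketch {
      // Sketch only: mirrors the teardown sequence in TestRSGroupsBase.
      static void resetGroups(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.removeRSGroup("master");   // RemoveRSGroup, as logged above
        groups.addRSGroup("master");      // AddRSGroup
        // Moving the master's own address is rejected with ConstraintException,
        // exactly as the entries below show, since it is not a live region server.
        groups.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase5.apache.org:40455")),
            "master");
      }
    }

The test logs this rejection only as a WARN ("Got this on setup, FYI") and continues, so the failure is expected noise in this run rather than a test error.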
2023-07-21 08:15:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:14,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:14,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,077 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:14,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:14,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:14,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:14,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.10.131:51500 deadline: 1689928514086, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,087 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:14,090 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,092 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:14,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:14,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:14,111 INFO [Listener at localhost/43371] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 564) - Thread LEAK? -, OpenFileDescriptor=847 (was 843) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=166 (was 166), AvailableMemoryMB=2418 (was 2421) 2023-07-21 08:15:14,111 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 08:15:14,128 INFO [Listener at localhost/43371] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=166, AvailableMemoryMB=2418 2023-07-21 08:15:14,129 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 08:15:14,129 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 08:15:14,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:14,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:14,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:14,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,142 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:14,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:14,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,144 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:14,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:14,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514151, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,152 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more
2023-07-21 08:15:14,153 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 08:15:14,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,154 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 08:15:14,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default
2023-07-21 08:15:14,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 08:15:14,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove
2023-07-21 08:15:14,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-21 08:15:14,157 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default
2023-07-21 08:15:14,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove
2023-07-21 08:15:14,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-21 08:15:14,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default
2023-07-21 08:15:14,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-21 08:15:14,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables
2023-07-21 08:15:14,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default
2023-07-21 08:15:14,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers
2023-07-21 08:15:14,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master
2023-07-21 08:15:14,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 08:15:14,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 08:15:14,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 08:15:14,174 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-21 08:15:14,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master
2023-07-21 08:15:14,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 08:15:14,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 08:15:14,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 08:15:14,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 08:15:14,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master
2023-07-21 08:15:14,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514183, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,184 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:14,185 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:14,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,186 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:14,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:14,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:14,208 INFO [Listener at localhost/43371] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? -, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=166 (was 166), AvailableMemoryMB=2419 (was 2418) - AvailableMemoryMB LEAK? 
- 2023-07-21 08:15:14,208 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 08:15:14,226 INFO [Listener at localhost/43371] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=166, AvailableMemoryMB=2417 2023-07-21 08:15:14,226 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 08:15:14,227 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 08:15:14,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:14,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:14,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:14,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,240 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:14,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:14,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,245 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:14,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:14,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514251, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,252 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more
2023-07-21 08:15:14,254 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-21 08:15:14,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,255 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-21 08:15:14,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default
2023-07-21 08:15:14,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-21 08:15:14,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default
2023-07-21 08:15:14,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-21 08:15:14,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables
2023-07-21 08:15:14,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default
2023-07-21 08:15:14,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers
2023-07-21 08:15:14,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master
2023-07-21 08:15:14,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 08:15:14,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-21 08:15:14,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-21 08:15:14,272 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-21 08:15:14,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master
2023-07-21 08:15:14,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-21 08:15:14,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-21 08:15:14,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-21 08:15:14,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-21 08:15:14,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup
2023-07-21 08:15:14,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-21 08:15:14,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master
2023-07-21 08:15:14,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514282, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,283 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:14,284 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:14,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,285 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:14,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:14,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:14,303 INFO [Listener at localhost/43371] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? -, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=166 (was 166), AvailableMemoryMB=2418 (was 2417) - AvailableMemoryMB LEAK? 
- 2023-07-21 08:15:14,303 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 08:15:14,319 INFO [Listener at localhost/43371] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=166, AvailableMemoryMB=2418 2023-07-21 08:15:14,319 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 08:15:14,319 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 08:15:14,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 08:15:14,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:14,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:14,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,331 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:14,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:14,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,335 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:14,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:14,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514339, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,339 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 08:15:14,341 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:14,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,342 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:14,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:14,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:14,343 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 08:15:14,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup Group_foo 2023-07-21 08:15:14,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 08:15:14,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 08:15:14,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$15(3014): Client=jenkins//172.31.10.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 08:15:14,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,358 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 08:15:14,361 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:14,364 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-21 08:15:14,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 08:15:14,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_foo 2023-07-21 08:15:14,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.10.131:51500 deadline: 1689928514460, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 08:15:14,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$16(3053): Client=jenkins//172.31.10.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 08:15:14,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 08:15:14,480 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 08:15:14,481 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-21 08:15:14,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 08:15:14,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup Group_anotherGroup 2023-07-21 08:15:14,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 08:15:14,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 08:15:14,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 08:15:14,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.HMaster$17(3086): Client=jenkins//172.31.10.131 delete Group_foo 2023-07-21 08:15:14,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,610 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,612 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 08:15:14,614 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,615 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 08:15:14,615 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 08:15:14,615 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,617 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 08:15:14,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-21 08:15:14,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 08:15:14,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_foo 2023-07-21 08:15:14,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 08:15:14,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 08:15:14,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.10.131:51500 deadline: 1689927374724, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 08:15:14,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
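The two ConstraintException failures in this test (removeRSGroup rejected with "RSGroup Group_foo is referenced by namespace: Group_foo", and createNamespace rejected with "Region server group foo does not exist.") show the namespace constraint being exercised: a namespace may only name an existing rsgroup in its hbase.rsgroup.name property, and a group cannot be removed while a namespace still references it. A minimal sketch of the passing path via the standard Admin API; the surrounding connection handling is assumed.

  import org.apache.hadoop.hbase.NamespaceDescriptor;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  // Sketch: bind a namespace to an rsgroup via the hbase.rsgroup.name property.
  final class GroupBoundNamespaceSketch {
    static void createGroupBoundNamespace(Connection conn) throws Exception {
      try (Admin admin = conn.getAdmin()) {
        // The group must already exist; otherwise preCreateNamespace fails with
        // ConstraintException ("Region server group ... does not exist.").
        NamespaceDescriptor ns = NamespaceDescriptor.create("Group_foo")
            .addConfiguration("hbase.rsgroup.name", "Group_foo")
            .build();
        admin.createNamespace(ns);
        // While this namespace exists, removing Group_foo is rejected with
        // "RSGroup Group_foo is referenced by namespace: Group_foo".
      }
    }
  }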
2023-07-21 08:15:14,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup Group_anotherGroup 2023-07-21 08:15:14,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 08:15:14,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.10.131 move tables [] to rsgroup default 2023-07-21 08:15:14,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
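The recurring "Updating znode: /hbase/rsgroup/<group>" and "Writing ZK GroupInfo count: N" lines come from RSGroupInfoManagerImpl mirroring the group definitions into ZooKeeper after each change. A rough sketch of inspecting that mirror with a plain ZooKeeper client; the quorum address is taken from this log, the timeout is illustrative, and the serialized payload format is an implementation detail that is not parsed here.

  import java.util.List;
  import org.apache.zookeeper.ZooKeeper;

  // Sketch: list the rsgroup znodes that RSGroupInfoManagerImpl keeps in sync.
  public final class RSGroupZnodeDump {
    public static void main(String[] args) throws Exception {
      // 127.0.0.1:59078 is the mini-cluster quorum seen in this log.
      ZooKeeper zk = new ZooKeeper("127.0.0.1:59078", 30000, event -> { });
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      for (String group : groups) {
        byte[] data = zk.getData("/hbase/rsgroup/" + group, false, null);
        // Each child holds that group's serialized RSGroupInfo.
        System.out.println(group + ": " + data.length + " bytes");
      }
      zk.close();
    }
  }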
2023-07-21 08:15:14,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 08:15:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [] to rsgroup default 2023-07-21 08:15:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 08:15:14,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.10.131 remove rsgroup master 2023-07-21 08:15:14,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 08:15:14,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 08:15:14,746 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 08:15:14,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.10.131 add rsgroup master 2023-07-21 08:15:14,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 08:15:14,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 08:15:14,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 08:15:14,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 08:15:14,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.10.131 move servers [jenkins-hbase5.apache.org:40455] to rsgroup master 2023-07-21 08:15:14,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 08:15:14,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.10.131:51500 deadline: 1689928514765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 2023-07-21 08:15:14,766 WARN [Listener at localhost/43371] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase5.apache.org:40455 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 08:15:14,768 INFO [Listener at localhost/43371] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 08:15:14,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.10.131 list rsgroup 2023-07-21 08:15:14,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 08:15:14,769 INFO [Listener at localhost/43371] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase5.apache.org:35687, jenkins-hbase5.apache.org:42375, jenkins-hbase5.apache.org:43707, jenkins-hbase5.apache.org:45347], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 08:15:14,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.10.131 initiates rsgroup info retrieval, group=default 2023-07-21 08:15:14,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40455] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.10.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 08:15:14,794 INFO [Listener at localhost/43371] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=166 (was 166), AvailableMemoryMB=2405 (was 2418) 2023-07-21 08:15:14,794 WARN [Listener at localhost/43371] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 08:15:14,794 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 08:15:14,795 INFO [Listener at localhost/43371] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 08:15:14,795 DEBUG [Listener at localhost/43371] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22cc8e8f to 127.0.0.1:59078 2023-07-21 08:15:14,795 DEBUG [Listener at localhost/43371] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,795 DEBUG [Listener at localhost/43371] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 
08:15:14,795 DEBUG [Listener at localhost/43371] util.JVMClusterUtil(257): Found active master hash=167457587, stopped=false 2023-07-21 08:15:14,795 DEBUG [Listener at localhost/43371] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 08:15:14,795 DEBUG [Listener at localhost/43371] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 08:15:14,795 INFO [Listener at localhost/43371] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:14,797 INFO [Listener at localhost/43371] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:14,797 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 08:15:14,798 DEBUG [Listener at localhost/43371] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x46b5b40c to 127.0.0.1:59078 2023-07-21 08:15:14,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:14,798 DEBUG [Listener at localhost/43371] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:14,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:14,798 INFO [Listener at localhost/43371] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,43707,1689927310815' ***** 2023-07-21 08:15:14,798 INFO [Listener at localhost/43371] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:14,799 INFO [Listener 
at localhost/43371] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,35687,1689927311021' ***** 2023-07-21 08:15:14,799 INFO [Listener at localhost/43371] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:14,799 INFO [Listener at localhost/43371] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,45347,1689927311169' ***** 2023-07-21 08:15:14,799 INFO [Listener at localhost/43371] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:14,799 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:14,799 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:14,799 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:14,799 INFO [Listener at localhost/43371] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,42375,1689927312560' ***** 2023-07-21 08:15:14,801 INFO [Listener at localhost/43371] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 08:15:14,801 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:14,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:14,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 08:15:14,817 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:14,817 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:14,817 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:14,817 INFO [RS:1;jenkins-hbase5:35687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7aeb7de4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:14,817 INFO [RS:0;jenkins-hbase5:43707] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1ace3e95{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:14,820 INFO [RS:3;jenkins-hbase5:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7db6a9e1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:14,820 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,820 INFO [RS:2;jenkins-hbase5:45347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@63be1b37{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 08:15:14,820 INFO 
[regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,820 INFO [RS:3;jenkins-hbase5:42375] server.AbstractConnector(383): Stopped ServerConnector@5cae68bf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:14,820 INFO [RS:2;jenkins-hbase5:45347] server.AbstractConnector(383): Stopped ServerConnector@390d766a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:14,820 INFO [RS:2;jenkins-hbase5:45347] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:14,820 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,821 INFO [RS:2;jenkins-hbase5:45347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c4d16d0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:14,820 INFO [RS:1;jenkins-hbase5:35687] server.AbstractConnector(383): Stopped ServerConnector@4e2832c6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:14,821 INFO [RS:1;jenkins-hbase5:35687] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:14,820 INFO [RS:3;jenkins-hbase5:42375] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:14,822 INFO [RS:2;jenkins-hbase5:45347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@352219ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:14,822 INFO [RS:1;jenkins-hbase5:35687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2657b892{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:14,820 INFO [RS:0;jenkins-hbase5:43707] server.AbstractConnector(383): Stopped ServerConnector@20265f8f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:14,824 INFO [RS:1;jenkins-hbase5:35687] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4cdfbf9d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:14,824 INFO [RS:0;jenkins-hbase5:43707] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:14,823 INFO [RS:3;jenkins-hbase5:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41a216b5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:14,824 INFO [RS:2;jenkins-hbase5:45347] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:14,826 INFO [RS:1;jenkins-hbase5:35687] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:14,825 INFO [RS:3;jenkins-hbase5:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2014f237{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:14,827 INFO [RS:1;jenkins-hbase5:35687] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 08:15:14,827 INFO [RS:0;jenkins-hbase5:43707] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1be9377a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:14,827 INFO [RS:1;jenkins-hbase5:35687] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:14,827 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(3305): Received CLOSE for 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:14,836 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:14,837 DEBUG [RS:1;jenkins-hbase5:35687] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6d351ff9 to 127.0.0.1:59078 2023-07-21 08:15:14,837 DEBUG [RS:1;jenkins-hbase5:35687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,837 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 08:15:14,837 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1478): Online Regions={197ea5a80778b8c2adce4be318829b31=hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31.} 2023-07-21 08:15:14,837 INFO [RS:3;jenkins-hbase5:42375] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:14,837 DEBUG [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1504): Waiting on 197ea5a80778b8c2adce4be318829b31 2023-07-21 08:15:14,837 INFO [RS:3;jenkins-hbase5:42375] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:14,837 INFO [RS:0;jenkins-hbase5:43707] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7dbd2218{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:14,837 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 08:15:14,837 INFO [RS:3;jenkins-hbase5:42375] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:14,838 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:14,838 DEBUG [RS:3;jenkins-hbase5:42375] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d513fa7 to 127.0.0.1:59078 2023-07-21 08:15:14,838 DEBUG [RS:3;jenkins-hbase5:42375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,838 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,42375,1689927312560; all regions closed. 2023-07-21 08:15:14,838 INFO [RS:0;jenkins-hbase5:43707] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 08:15:14,837 INFO [RS:2;jenkins-hbase5:45347] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:14,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 197ea5a80778b8c2adce4be318829b31, disabling compactions & flushes 2023-07-21 08:15:14,838 INFO [RS:2;jenkins-hbase5:45347] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 08:15:14,838 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:14,838 DEBUG [RS:2;jenkins-hbase5:45347] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c970812 to 127.0.0.1:59078 2023-07-21 08:15:14,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:14,839 DEBUG [RS:2;jenkins-hbase5:45347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,839 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,45347,1689927311169; all regions closed. 2023-07-21 08:15:14,839 INFO [RS:0;jenkins-hbase5:43707] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 08:15:14,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:14,839 INFO [RS:0;jenkins-hbase5:43707] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 08:15:14,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. after waiting 0 ms 2023-07-21 08:15:14,839 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(3305): Received CLOSE for d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:14,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:14,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 197ea5a80778b8c2adce4be318829b31 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 08:15:14,844 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:14,844 DEBUG [RS:0;jenkins-hbase5:43707] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3fba5411 to 127.0.0.1:59078 2023-07-21 08:15:14,844 DEBUG [RS:0;jenkins-hbase5:43707] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,844 INFO [RS:0;jenkins-hbase5:43707] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:14,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing d956cbe28a1cb70c40a58098938f8144, disabling compactions & flushes 2023-07-21 08:15:14,844 INFO [RS:0;jenkins-hbase5:43707] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:14,844 INFO [RS:0;jenkins-hbase5:43707] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:14,844 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 08:15:14,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 
2023-07-21 08:15:14,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:14,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. after waiting 0 ms 2023-07-21 08:15:14,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:14,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing d956cbe28a1cb70c40a58098938f8144 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-21 08:15:14,847 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 08:15:14,847 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, d956cbe28a1cb70c40a58098938f8144=hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144.} 2023-07-21 08:15:14,847 DEBUG [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1504): Waiting on 1588230740, d956cbe28a1cb70c40a58098938f8144 2023-07-21 08:15:14,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 08:15:14,849 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 08:15:14,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 08:15:14,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 08:15:14,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 08:15:14,849 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-21 08:15:14,859 DEBUG [RS:3;jenkins-hbase5:42375] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs 2023-07-21 08:15:14,859 INFO [RS:3;jenkins-hbase5:42375] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C42375%2C1689927312560:(num 1689927312882) 2023-07-21 08:15:14,859 DEBUG [RS:3;jenkins-hbase5:42375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,859 INFO [RS:3;jenkins-hbase5:42375] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,861 INFO [regionserver/jenkins-hbase5:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,865 INFO [RS:3;jenkins-hbase5:42375] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:14,867 DEBUG [RS:2;jenkins-hbase5:45347] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs 2023-07-21 08:15:14,867 INFO [RS:2;jenkins-hbase5:45347] 
wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C45347%2C1689927311169:(num 1689927311750) 2023-07-21 08:15:14,867 DEBUG [RS:2;jenkins-hbase5:45347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:14,867 INFO [RS:2;jenkins-hbase5:45347] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:14,868 INFO [RS:3;jenkins-hbase5:42375] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:14,868 INFO [RS:2;jenkins-hbase5:45347] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:14,868 INFO [RS:3;jenkins-hbase5:42375] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:14,868 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:14,868 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:14,868 INFO [RS:3;jenkins-hbase5:42375] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:14,868 INFO [RS:2;jenkins-hbase5:45347] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:14,868 INFO [RS:2;jenkins-hbase5:45347] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:14,868 INFO [RS:2;jenkins-hbase5:45347] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 08:15:14,873 INFO [RS:3;jenkins-hbase5:42375] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:42375 2023-07-21 08:15:14,874 INFO [RS:2;jenkins-hbase5:45347] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:45347 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,42375,1689927312560 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,876 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,877 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,42375,1689927312560] 2023-07-21 08:15:14,877 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,42375,1689927312560; numProcessing=1 2023-07-21 08:15:14,878 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,42375,1689927312560 already deleted, retry=false 2023-07-21 08:15:14,878 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,42375,1689927312560 expired; onlineServers=3 2023-07-21 08:15:14,881 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:14,881 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:14,881 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:14,881 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:14,881 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,45347,1689927311169 2023-07-21 08:15:14,881 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,45347,1689927311169] 2023-07-21 08:15:14,881 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,45347,1689927311169; numProcessing=2 2023-07-21 08:15:14,884 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase5.apache.org,45347,1689927311169 already deleted, retry=false 2023-07-21 08:15:14,884 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,45347,1689927311169 expired; onlineServers=2 2023-07-21 08:15:14,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/.tmp/info/5e6c6e1e30ab405abc5f0c5f15d383d4 2023-07-21 08:15:14,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/.tmp/m/73574d1e0f404336b091f9e02ab5e697 2023-07-21 08:15:14,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/info/4d72113f0808472aaa7e77f6a8765018 2023-07-21 08:15:14,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e6c6e1e30ab405abc5f0c5f15d383d4 2023-07-21 08:15:14,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73574d1e0f404336b091f9e02ab5e697 2023-07-21 08:15:14,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/.tmp/info/5e6c6e1e30ab405abc5f0c5f15d383d4 as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/info/5e6c6e1e30ab405abc5f0c5f15d383d4 2023-07-21 08:15:14,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/.tmp/m/73574d1e0f404336b091f9e02ab5e697 as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/m/73574d1e0f404336b091f9e02ab5e697 2023-07-21 08:15:14,909 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d72113f0808472aaa7e77f6a8765018 2023-07-21 08:15:14,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e6c6e1e30ab405abc5f0c5f15d383d4 2023-07-21 08:15:14,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/info/5e6c6e1e30ab405abc5f0c5f15d383d4, entries=3, sequenceid=9, filesize=5.0 K 2023-07-21 08:15:14,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 197ea5a80778b8c2adce4be318829b31 in 76ms, sequenceid=9, compaction requested=false 2023-07-21 08:15:14,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 73574d1e0f404336b091f9e02ab5e697 2023-07-21 08:15:14,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/m/73574d1e0f404336b091f9e02ab5e697, entries=12, sequenceid=29, filesize=5.4 K 2023-07-21 08:15:14,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for d956cbe28a1cb70c40a58098938f8144 in 77ms, sequenceid=29, compaction requested=false 2023-07-21 08:15:14,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/namespace/197ea5a80778b8c2adce4be318829b31/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 08:15:14,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:14,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 197ea5a80778b8c2adce4be318829b31: 2023-07-21 08:15:14,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689927311913.197ea5a80778b8c2adce4be318829b31. 2023-07-21 08:15:14,932 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/rep_barrier/a5e6f23886f545c2b76092fb607935a4 2023-07-21 08:15:14,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/rsgroup/d956cbe28a1cb70c40a58098938f8144/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 08:15:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:14,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 2023-07-21 08:15:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for d956cbe28a1cb70c40a58098938f8144: 2023-07-21 08:15:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689927312086.d956cbe28a1cb70c40a58098938f8144. 
2023-07-21 08:15:14,937 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5e6f23886f545c2b76092fb607935a4 2023-07-21 08:15:14,946 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/table/581fe77a43f44737a50244ee213baebe 2023-07-21 08:15:14,950 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 581fe77a43f44737a50244ee213baebe 2023-07-21 08:15:14,951 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/info/4d72113f0808472aaa7e77f6a8765018 as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/info/4d72113f0808472aaa7e77f6a8765018 2023-07-21 08:15:14,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d72113f0808472aaa7e77f6a8765018 2023-07-21 08:15:14,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/info/4d72113f0808472aaa7e77f6a8765018, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 08:15:14,956 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/rep_barrier/a5e6f23886f545c2b76092fb607935a4 as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/rep_barrier/a5e6f23886f545c2b76092fb607935a4 2023-07-21 08:15:14,960 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5e6f23886f545c2b76092fb607935a4 2023-07-21 08:15:14,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/rep_barrier/a5e6f23886f545c2b76092fb607935a4, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 08:15:14,961 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/.tmp/table/581fe77a43f44737a50244ee213baebe as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/table/581fe77a43f44737a50244ee213baebe 2023-07-21 08:15:14,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 581fe77a43f44737a50244ee213baebe 2023-07-21 08:15:14,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/table/581fe77a43f44737a50244ee213baebe, 
entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 08:15:14,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 117ms, sequenceid=26, compaction requested=false 2023-07-21 08:15:14,974 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 08:15:14,974 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 08:15:14,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:14,975 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 08:15:14,975 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase5:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 08:15:14,997 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:14,997 INFO [RS:3;jenkins-hbase5:42375] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,42375,1689927312560; zookeeper connection closed. 2023-07-21 08:15:14,997 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x101f28f2781000b, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:14,997 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7143ec7a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7143ec7a 2023-07-21 08:15:15,037 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,35687,1689927311021; all regions closed. 2023-07-21 08:15:15,042 DEBUG [RS:1;jenkins-hbase5:35687] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C35687%2C1689927311021:(num 1689927311754) 2023-07-21 08:15:15,042 DEBUG [RS:1;jenkins-hbase5:35687] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 08:15:15,042 INFO [RS:1;jenkins-hbase5:35687] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 08:15:15,042 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 08:15:15,044 INFO [RS:1;jenkins-hbase5:35687] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:35687 2023-07-21 08:15:15,045 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:15,045 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:15,045 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,35687,1689927311021 2023-07-21 08:15:15,047 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,35687,1689927311021] 2023-07-21 08:15:15,047 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,35687,1689927311021; numProcessing=3 2023-07-21 08:15:15,047 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,43707,1689927310815; all regions closed. 2023-07-21 08:15:15,050 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,35687,1689927311021 already deleted, retry=false 2023-07-21 08:15:15,050 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,35687,1689927311021 expired; onlineServers=1 2023-07-21 08:15:15,053 DEBUG [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs 2023-07-21 08:15:15,053 INFO [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C43707%2C1689927310815.meta:.meta(num 1689927311856) 2023-07-21 08:15:15,058 DEBUG [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/oldWALs 2023-07-21 08:15:15,058 INFO [RS:0;jenkins-hbase5:43707] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase5.apache.org%2C43707%2C1689927310815:(num 1689927311752) 2023-07-21 08:15:15,058 DEBUG [RS:0;jenkins-hbase5:43707] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:15,058 INFO [RS:0;jenkins-hbase5:43707] regionserver.LeaseManager(133): Closed leases 2023-07-21 08:15:15,058 INFO [RS:0;jenkins-hbase5:43707] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase5:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 08:15:15,058 INFO [regionserver/jenkins-hbase5:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 08:15:15,059 INFO [RS:0;jenkins-hbase5:43707] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:43707 2023-07-21 08:15:15,061 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase5.apache.org,43707,1689927310815 2023-07-21 08:15:15,061 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 08:15:15,062 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase5.apache.org,43707,1689927310815] 2023-07-21 08:15:15,062 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase5.apache.org,43707,1689927310815; numProcessing=4 2023-07-21 08:15:15,063 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase5.apache.org,43707,1689927310815 already deleted, retry=false 2023-07-21 08:15:15,063 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase5.apache.org,43707,1689927310815 expired; onlineServers=0 2023-07-21 08:15:15,063 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase5.apache.org,40455,1689927310649' ***** 2023-07-21 08:15:15,063 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 08:15:15,064 DEBUG [M:0;jenkins-hbase5:40455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@dc4b6d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase5.apache.org/172.31.10.131:0 2023-07-21 08:15:15,064 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 08:15:15,066 INFO [M:0;jenkins-hbase5:40455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c2d406c{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 08:15:15,066 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 08:15:15,066 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 08:15:15,067 INFO [M:0;jenkins-hbase5:40455] server.AbstractConnector(383): Stopped ServerConnector@2c973153{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:15,067 INFO [M:0;jenkins-hbase5:40455] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 08:15:15,067 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 08:15:15,067 INFO [M:0;jenkins-hbase5:40455] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@760f93bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 08:15:15,068 INFO [M:0;jenkins-hbase5:40455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@65704643{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/hadoop.log.dir/,STOPPED} 2023-07-21 08:15:15,068 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegionServer(1144): stopping server jenkins-hbase5.apache.org,40455,1689927310649 2023-07-21 08:15:15,068 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegionServer(1170): stopping server jenkins-hbase5.apache.org,40455,1689927310649; all regions closed. 2023-07-21 08:15:15,068 DEBUG [M:0;jenkins-hbase5:40455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 08:15:15,068 INFO [M:0;jenkins-hbase5:40455] master.HMaster(1491): Stopping master jetty server 2023-07-21 08:15:15,069 INFO [M:0;jenkins-hbase5:40455] server.AbstractConnector(383): Stopped ServerConnector@32c4cc7c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 08:15:15,069 DEBUG [M:0;jenkins-hbase5:40455] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 08:15:15,069 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 08:15:15,069 DEBUG [M:0;jenkins-hbase5:40455] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 08:15:15,069 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927311483] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.small.0-1689927311483,5,FailOnTimeoutGroup] 2023-07-21 08:15:15,069 INFO [M:0;jenkins-hbase5:40455] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 08:15:15,069 INFO [M:0;jenkins-hbase5:40455] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 08:15:15,070 DEBUG [master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927311483] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase5:0:becomeActiveMaster-HFileCleaner.large.0-1689927311483,5,FailOnTimeoutGroup] 2023-07-21 08:15:15,070 INFO [M:0;jenkins-hbase5:40455] hbase.ChoreService(369): Chore service for: master/jenkins-hbase5:0 had [] on shutdown 2023-07-21 08:15:15,070 DEBUG [M:0;jenkins-hbase5:40455] master.HMaster(1512): Stopping service threads 2023-07-21 08:15:15,070 INFO [M:0;jenkins-hbase5:40455] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 08:15:15,070 ERROR [M:0;jenkins-hbase5:40455] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 08:15:15,070 INFO [M:0;jenkins-hbase5:40455] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 08:15:15,070 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 08:15:15,070 DEBUG [M:0;jenkins-hbase5:40455] zookeeper.ZKUtil(398): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 08:15:15,070 WARN [M:0;jenkins-hbase5:40455] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 08:15:15,070 INFO [M:0;jenkins-hbase5:40455] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 08:15:15,071 INFO [M:0;jenkins-hbase5:40455] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 08:15:15,071 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 08:15:15,071 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:15,071 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:15,071 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 08:15:15,071 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 08:15:15,071 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.23 KB heapSize=90.66 KB 2023-07-21 08:15:15,081 INFO [M:0;jenkins-hbase5:40455] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.23 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7009c1726e48473f86300763cfc30242 2023-07-21 08:15:15,092 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7009c1726e48473f86300763cfc30242 as hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7009c1726e48473f86300763cfc30242 2023-07-21 08:15:15,097 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 08:15:15,097 INFO [RS:2;jenkins-hbase5:45347] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,45347,1689927311169; zookeeper connection closed. 
2023-07-21 08:15:15,097 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:45347-0x101f28f27810003, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,098 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@35a3b134] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@35a3b134
2023-07-21 08:15:15,099 INFO [M:0;jenkins-hbase5:40455] regionserver.HStore(1080): Added hdfs://localhost:43379/user/jenkins/test-data/97c61cb9-7f94-b205-555a-b43e99e4e5b5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7009c1726e48473f86300763cfc30242, entries=22, sequenceid=175, filesize=11.1 K
2023-07-21 08:15:15,099 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegion(2948): Finished flush of dataSize ~76.23 KB/78056, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=175, compaction requested=false
2023-07-21 08:15:15,101 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 08:15:15,101 DEBUG [M:0;jenkins-hbase5:40455] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-21 08:15:15,106 INFO [M:0;jenkins-hbase5:40455] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-21 08:15:15,106 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-21 08:15:15,107 INFO [M:0;jenkins-hbase5:40455] ipc.NettyRpcServer(158): Stopping server on /172.31.10.131:40455
2023-07-21 08:15:15,108 DEBUG [M:0;jenkins-hbase5:40455] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase5.apache.org,40455,1689927310649 already deleted, retry=false
2023-07-21 08:15:15,699 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,699 INFO [M:0;jenkins-hbase5:40455] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,40455,1689927310649; zookeeper connection closed.
2023-07-21 08:15:15,699 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): master:40455-0x101f28f27810000, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,799 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,799 INFO [RS:0;jenkins-hbase5:43707] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,43707,1689927310815; zookeeper connection closed.
2023-07-21 08:15:15,799 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:43707-0x101f28f27810001, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,799 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5a5a6858] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5a5a6858
2023-07-21 08:15:15,899 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,899 DEBUG [Listener at localhost/43371-EventThread] zookeeper.ZKWatcher(600): regionserver:35687-0x101f28f27810002, quorum=127.0.0.1:59078, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 08:15:15,899 INFO [RS:1;jenkins-hbase5:35687] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase5.apache.org,35687,1689927311021; zookeeper connection closed.
2023-07-21 08:15:15,899 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@457ed762] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@457ed762
2023-07-21 08:15:15,900 INFO [Listener at localhost/43371] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-21 08:15:15,900 WARN [Listener at localhost/43371] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 08:15:15,903 INFO [Listener at localhost/43371] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 08:15:16,007 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 08:15:16,007 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1446181821-172.31.10.131-1689927309871 (Datanode Uuid 6e1178d7-a133-4550-b089-aaf1db3cb31f) service to localhost/127.0.0.1:43379
2023-07-21 08:15:16,007 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data5/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,008 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data6/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,009 WARN [Listener at localhost/43371] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 08:15:16,011 INFO [Listener at localhost/43371] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 08:15:16,114 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 08:15:16,114 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1446181821-172.31.10.131-1689927309871 (Datanode Uuid fa80365e-a45c-4531-9d9b-8c96cdede68d) service to localhost/127.0.0.1:43379
2023-07-21 08:15:16,115 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data3/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,115 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data4/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,116 WARN [Listener at localhost/43371] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 08:15:16,118 INFO [Listener at localhost/43371] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 08:15:16,221 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 08:15:16,221 WARN [BP-1446181821-172.31.10.131-1689927309871 heartbeating to localhost/127.0.0.1:43379] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1446181821-172.31.10.131-1689927309871 (Datanode Uuid 0073a575-61f0-4eae-9026-a6dfe1cb81d3) service to localhost/127.0.0.1:43379
2023-07-21 08:15:16,222 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data1/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,222 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5cd50c9e-8cc5-cd88-7152-edebd0070f02/cluster_6fa87f3b-c5be-955e-385c-fbf05a6956d0/dfs/data/data2/current/BP-1446181821-172.31.10.131-1689927309871] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 08:15:16,230 INFO [Listener at localhost/43371] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 08:15:16,344 INFO [Listener at localhost/43371] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-21 08:15:16,372 INFO [Listener at localhost/43371] hbase.HBaseTestingUtility(1293): Minicluster is down
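Note: the shutdown sequence logged above (region servers exiting, master store flush, datanode and MiniZK shutdown, ending with "Minicluster is down") is driven by the test harness tearing down the mini cluster. The following is a minimal, hypothetical sketch of that JUnit lifecycle, not the actual TestRSGroupsAdmin1 source; the class and method names are illustrative, and only the HBaseTestingUtility start/shutdown calls are real API.

    // Hypothetical sketch of the lifecycle that produces startup/shutdown log output like the above.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      // Shared harness instance for the whole test class.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpBeforeClass() throws Exception {
        // Starts mini DFS, mini ZooKeeper, the master and the region servers.
        TEST_UTIL.startMiniCluster();
      }

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops HBase, the datanodes and the MiniZK cluster; its final log line
        // is "Minicluster is down", as seen at the end of this log.
        TEST_UTIL.shutdownMiniCluster();
      }
    }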