2023-07-12 22:17:55,830 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729 2023-07-12 22:17:55,852 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 22:17:55,869 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 22:17:55,870 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6, deleteOnExit=true 2023-07-12 22:17:55,870 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 22:17:55,871 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/test.cache.data in system properties and HBase conf 2023-07-12 22:17:55,871 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 22:17:55,872 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir in system properties and HBase conf 2023-07-12 22:17:55,872 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 22:17:55,873 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 22:17:55,873 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 22:17:56,001 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 22:17:56,435 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 22:17:56,439 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:17:56,440 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:17:56,440 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 22:17:56,440 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:17:56,441 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 22:17:56,441 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 22:17:56,441 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:17:56,442 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:17:56,442 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 22:17:56,443 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/nfs.dump.dir in system properties and HBase conf 2023-07-12 22:17:56,443 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir in system properties and HBase conf 2023-07-12 22:17:56,444 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:17:56,444 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 22:17:56,444 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 22:17:56,977 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:17:56,982 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:17:57,310 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 22:17:57,500 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 22:17:57,514 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:17:57,549 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:17:57,586 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/Jetty_localhost_42795_hdfs____h8nym7/webapp 2023-07-12 22:17:57,726 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42795 2023-07-12 22:17:57,738 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:17:57,738 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:17:58,203 WARN [Listener at localhost/40075] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:17:58,304 WARN [Listener at localhost/40075] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:17:58,331 WARN [Listener at localhost/40075] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:17:58,337 INFO [Listener at localhost/40075] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:17:58,342 INFO [Listener at localhost/40075] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/Jetty_localhost_37219_datanode____inpi5/webapp 2023-07-12 22:17:58,469 INFO [Listener at localhost/40075] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37219 2023-07-12 22:17:58,923 WARN [Listener at localhost/34533] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:17:58,936 WARN [Listener at localhost/34533] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:17:58,941 WARN [Listener at localhost/34533] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:17:58,944 INFO [Listener at localhost/34533] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:17:58,951 INFO [Listener at localhost/34533] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/Jetty_localhost_35801_datanode____1320ou/webapp 2023-07-12 22:17:59,068 INFO [Listener at localhost/34533] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35801 2023-07-12 22:17:59,085 WARN [Listener at localhost/45565] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:17:59,113 WARN [Listener at localhost/45565] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:17:59,116 WARN [Listener at localhost/45565] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:17:59,118 INFO [Listener at localhost/45565] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:17:59,125 INFO [Listener at localhost/45565] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/Jetty_localhost_38673_datanode____.a1zf0/webapp 2023-07-12 22:17:59,254 INFO [Listener at localhost/45565] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38673 2023-07-12 22:17:59,270 WARN [Listener at localhost/40739] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:17:59,445 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeef4f1ca22ab6815: Processing first storage report for DS-1de8e13e-6649-4d05-8631-84ff0e590406 from datanode b163f286-5e2c-4ef9-a39b-a7ac5c79bf2e 2023-07-12 22:17:59,447 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeef4f1ca22ab6815: from storage DS-1de8e13e-6649-4d05-8631-84ff0e590406 node DatanodeRegistration(127.0.0.1:46197, datanodeUuid=b163f286-5e2c-4ef9-a39b-a7ac5c79bf2e, infoPort=37175, 
infoSecurePort=0, ipcPort=45565, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,447 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7684a9c5d0451b8: Processing first storage report for DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0 from datanode dab512fc-edd3-4a5f-9792-f93a593c97dd 2023-07-12 22:17:59,447 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7684a9c5d0451b8: from storage DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0 node DatanodeRegistration(127.0.0.1:33045, datanodeUuid=dab512fc-edd3-4a5f-9792-f93a593c97dd, infoPort=34355, infoSecurePort=0, ipcPort=34533, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,448 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcb5180cfba0dda1f: Processing first storage report for DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34 from datanode def65813-8de1-44de-b3c6-d7857e827dfd 2023-07-12 22:17:59,448 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcb5180cfba0dda1f: from storage DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34 node DatanodeRegistration(127.0.0.1:43679, datanodeUuid=def65813-8de1-44de-b3c6-d7857e827dfd, infoPort=37815, infoSecurePort=0, ipcPort=40739, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,448 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeef4f1ca22ab6815: Processing first storage report for DS-a4a03064-afe8-4846-9cc5-3f07e1b083f2 from datanode b163f286-5e2c-4ef9-a39b-a7ac5c79bf2e 2023-07-12 22:17:59,448 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeef4f1ca22ab6815: from storage DS-a4a03064-afe8-4846-9cc5-3f07e1b083f2 node DatanodeRegistration(127.0.0.1:46197, datanodeUuid=b163f286-5e2c-4ef9-a39b-a7ac5c79bf2e, infoPort=37175, infoSecurePort=0, ipcPort=45565, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,448 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7684a9c5d0451b8: Processing first storage report for DS-f58ac60d-c300-42a1-95ee-77005e72ad5c from datanode dab512fc-edd3-4a5f-9792-f93a593c97dd 2023-07-12 22:17:59,449 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7684a9c5d0451b8: from storage DS-f58ac60d-c300-42a1-95ee-77005e72ad5c node DatanodeRegistration(127.0.0.1:33045, datanodeUuid=dab512fc-edd3-4a5f-9792-f93a593c97dd, infoPort=34355, infoSecurePort=0, ipcPort=34533, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,449 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcb5180cfba0dda1f: Processing first storage report for DS-75b8e3b7-a5bc-4fe7-9c5a-8b5a6bc505f7 from datanode def65813-8de1-44de-b3c6-d7857e827dfd 2023-07-12 22:17:59,449 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcb5180cfba0dda1f: from storage 
DS-75b8e3b7-a5bc-4fe7-9c5a-8b5a6bc505f7 node DatanodeRegistration(127.0.0.1:43679, datanodeUuid=def65813-8de1-44de-b3c6-d7857e827dfd, infoPort=37815, infoSecurePort=0, ipcPort=40739, storageInfo=lv=-57;cid=testClusterID;nsid=1405435610;c=1689200277053), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:17:59,676 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729 2023-07-12 22:17:59,749 INFO [Listener at localhost/40739] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/zookeeper_0, clientPort=59420, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 22:17:59,776 INFO [Listener at localhost/40739] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59420 2023-07-12 22:17:59,788 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:17:59,790 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:00,484 INFO [Listener at localhost/40739] util.FSUtils(471): Created version file at hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 with version=8 2023-07-12 22:18:00,485 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/hbase-staging 2023-07-12 22:18:00,493 DEBUG [Listener at localhost/40739] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 22:18:00,493 DEBUG [Listener at localhost/40739] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 22:18:00,493 DEBUG [Listener at localhost/40739] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 22:18:00,493 DEBUG [Listener at localhost/40739] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
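
The minicluster whose startup is traced above (one master, three region servers, three datanodes, one ZooKeeper server, per the logged StartMiniClusterOption) is driven from the test side through HBaseTestingUtility. The following is a minimal, hypothetical sketch of that call sequence, assuming the stock HBase 2.4 test APIs; it is not the actual TestRSGroupsAdmin1 source.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  // Not the real test setup; a sketch of the API whose effects the log records.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    // Matches the logged StartMiniClusterOption{numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1}.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);   // produces the "Starting up minicluster" lines above
    try {
      // test body would run against TEST_UTIL.getConnection() here
    } finally {
      TEST_UTIL.shutdownMiniCluster();    // tears down HBase, DFS and ZooKeeper
    }
  }
}
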
2023-07-12 22:18:00,822 INFO [Listener at localhost/40739] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 22:18:01,376 INFO [Listener at localhost/40739] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:01,438 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:01,439 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:01,439 INFO [Listener at localhost/40739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:01,440 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:01,440 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:01,623 INFO [Listener at localhost/40739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:01,717 DEBUG [Listener at localhost/40739] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 22:18:01,852 INFO [Listener at localhost/40739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34283 2023-07-12 22:18:01,868 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:01,871 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:01,903 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34283 connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:01,963 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:342830x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:01,968 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34283-0x1015b9d43b70000 connected 2023-07-12 22:18:02,006 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:02,007 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:02,012 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:02,024 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34283 2023-07-12 22:18:02,024 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34283 2023-07-12 22:18:02,025 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34283 2023-07-12 22:18:02,028 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34283 2023-07-12 22:18:02,030 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34283 2023-07-12 22:18:02,079 INFO [Listener at localhost/40739] log.Log(170): Logging initialized @7152ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 22:18:02,249 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:02,250 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:02,251 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:02,253 INFO [Listener at localhost/40739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 22:18:02,254 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:02,254 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:02,258 INFO [Listener at localhost/40739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
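
The ZooKeeper ensemble at 127.0.0.1:59420 that the master registers against above is the same one a test-side client would use to reach the cluster. A hypothetical, self-contained sketch is shown below; inside the test itself, TEST_UTIL.getConnection() would normally be used instead of building a Configuration by hand.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    // clientPort as logged by MiniZooKeeperCluster for this run
    conf.setInt("hbase.zookeeper.property.clientPort", 59420);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
    }
  }
}
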
2023-07-12 22:18:02,331 INFO [Listener at localhost/40739] http.HttpServer(1146): Jetty bound to port 36825 2023-07-12 22:18:02,333 INFO [Listener at localhost/40739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:02,370 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,374 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67509e5d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:02,375 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,375 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d3eef7e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:02,623 INFO [Listener at localhost/40739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:02,637 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:02,637 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:02,640 INFO [Listener at localhost/40739] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:02,647 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,673 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5aa9eede{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/jetty-0_0_0_0-36825-hbase-server-2_4_18-SNAPSHOT_jar-_-any-391966575974721496/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:02,685 INFO [Listener at localhost/40739] server.AbstractConnector(333): Started ServerConnector@3348b71b{HTTP/1.1, (http/1.1)}{0.0.0.0:36825} 2023-07-12 22:18:02,685 INFO [Listener at localhost/40739] server.Server(415): Started @7758ms 2023-07-12 22:18:02,689 INFO [Listener at localhost/40739] master.HMaster(444): hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105, hbase.cluster.distributed=false 2023-07-12 22:18:02,766 INFO [Listener at localhost/40739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:02,766 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,766 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,767 INFO 
[Listener at localhost/40739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:02,767 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,767 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:02,772 INFO [Listener at localhost/40739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:02,776 INFO [Listener at localhost/40739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37441 2023-07-12 22:18:02,778 INFO [Listener at localhost/40739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:02,785 DEBUG [Listener at localhost/40739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:02,786 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:02,788 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:02,790 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37441 connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:02,797 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:374410x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:02,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37441-0x1015b9d43b70001 connected 2023-07-12 22:18:02,798 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:02,800 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:02,801 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:02,801 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37441 2023-07-12 22:18:02,802 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37441 2023-07-12 22:18:02,802 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37441 2023-07-12 22:18:02,803 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37441 2023-07-12 22:18:02,803 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37441 2023-07-12 22:18:02,805 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:02,805 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:02,805 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:02,807 INFO [Listener at localhost/40739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:02,807 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:02,807 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:02,807 INFO [Listener at localhost/40739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:02,810 INFO [Listener at localhost/40739] http.HttpServer(1146): Jetty bound to port 34807 2023-07-12 22:18:02,810 INFO [Listener at localhost/40739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:02,813 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,814 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@36760574{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:02,814 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,815 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d5f0290{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:02,940 INFO [Listener at localhost/40739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:02,942 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:02,942 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:02,942 INFO [Listener at localhost/40739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:02,944 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:02,948 INFO 
[Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3985421f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/jetty-0_0_0_0-34807-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4051967739291258318/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:02,950 INFO [Listener at localhost/40739] server.AbstractConnector(333): Started ServerConnector@5c7f14cd{HTTP/1.1, (http/1.1)}{0.0.0.0:34807} 2023-07-12 22:18:02,950 INFO [Listener at localhost/40739] server.Server(415): Started @8023ms 2023-07-12 22:18:02,965 INFO [Listener at localhost/40739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:02,966 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,966 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,966 INFO [Listener at localhost/40739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:02,966 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:02,967 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:02,967 INFO [Listener at localhost/40739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:02,968 INFO [Listener at localhost/40739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41059 2023-07-12 22:18:02,969 INFO [Listener at localhost/40739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:02,972 DEBUG [Listener at localhost/40739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:02,973 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:02,975 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:02,976 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41059 connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:02,980 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:410590x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
22:18:02,981 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41059-0x1015b9d43b70002 connected 2023-07-12 22:18:02,981 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:02,982 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:02,983 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:02,986 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41059 2023-07-12 22:18:02,986 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41059 2023-07-12 22:18:02,987 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41059 2023-07-12 22:18:02,991 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41059 2023-07-12 22:18:02,991 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41059 2023-07-12 22:18:02,994 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:02,994 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:02,995 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:02,995 INFO [Listener at localhost/40739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:02,995 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:02,995 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:02,996 INFO [Listener at localhost/40739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 22:18:02,996 INFO [Listener at localhost/40739] http.HttpServer(1146): Jetty bound to port 41289 2023-07-12 22:18:02,997 INFO [Listener at localhost/40739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:03,003 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,003 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4124822{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:03,004 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,004 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e5ca88b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:03,131 INFO [Listener at localhost/40739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:03,132 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:03,133 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:03,133 INFO [Listener at localhost/40739] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:03,135 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,136 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@65f8279a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/jetty-0_0_0_0-41289-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4532242401991841251/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:03,137 INFO [Listener at localhost/40739] server.AbstractConnector(333): Started ServerConnector@5e7280c7{HTTP/1.1, (http/1.1)}{0.0.0.0:41289} 2023-07-12 22:18:03,137 INFO [Listener at localhost/40739] server.Server(415): Started @8210ms 2023-07-12 22:18:03,156 INFO [Listener at localhost/40739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:03,156 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:03,157 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:03,157 INFO [Listener at localhost/40739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:03,157 INFO 
[Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:03,157 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:03,158 INFO [Listener at localhost/40739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:03,160 INFO [Listener at localhost/40739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44439 2023-07-12 22:18:03,160 INFO [Listener at localhost/40739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:03,166 DEBUG [Listener at localhost/40739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:03,168 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:03,170 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:03,172 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44439 connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:03,176 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:444390x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:03,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44439-0x1015b9d43b70003 connected 2023-07-12 22:18:03,179 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:03,180 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:03,181 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:03,188 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44439 2023-07-12 22:18:03,188 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44439 2023-07-12 22:18:03,189 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44439 2023-07-12 22:18:03,195 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44439 2023-07-12 22:18:03,195 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44439 2023-07-12 22:18:03,197 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:03,198 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:03,198 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:03,198 INFO [Listener at localhost/40739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:03,198 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:03,199 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:03,199 INFO [Listener at localhost/40739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:03,200 INFO [Listener at localhost/40739] http.HttpServer(1146): Jetty bound to port 36729 2023-07-12 22:18:03,200 INFO [Listener at localhost/40739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:03,211 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,212 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@618603ab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:03,212 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,213 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53ca0225{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:03,344 INFO [Listener at localhost/40739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:03,345 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:03,345 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:03,346 INFO [Listener at localhost/40739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:03,347 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:03,348 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@41710862{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/jetty-0_0_0_0-36729-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5483732908766379622/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:03,349 INFO [Listener at localhost/40739] server.AbstractConnector(333): Started ServerConnector@43e2a6e5{HTTP/1.1, (http/1.1)}{0.0.0.0:36729} 2023-07-12 22:18:03,349 INFO [Listener at localhost/40739] server.Server(415): Started @8422ms 2023-07-12 22:18:03,356 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:03,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7c96907d{HTTP/1.1, (http/1.1)}{0.0.0.0:42023} 2023-07-12 22:18:03,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8433ms 2023-07-12 22:18:03,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:03,372 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:03,374 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:03,404 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:03,404 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:03,404 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:03,404 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:03,406 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:03,407 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:03,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34283,1689200280641 from backup master directory 2023-07-12 22:18:03,409 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:03,413 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:03,413 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:03,414 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:03,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:03,417 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 22:18:03,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 22:18:03,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/hbase.id with ID: c038e8fc-22e5-4b4d-81b2-aff8d649274f 2023-07-12 22:18:03,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:03,584 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:03,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7036f813 to 127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:03,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@162ed829, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:03,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:03,706 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 22:18:03,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 22:18:03,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 22:18:03,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 22:18:03,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 22:18:03,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:03,816 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store-tmp 2023-07-12 22:18:03,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:03,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:03,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:03,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:03,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:03,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 22:18:03,888 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:03,890 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/WALs/jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:03,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34283%2C1689200280641, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/WALs/jenkins-hbase4.apache.org,34283,1689200280641, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/oldWALs, maxLogs=10 2023-07-12 22:18:04,004 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:04,004 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:04,007 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:04,016 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 22:18:04,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/WALs/jenkins-hbase4.apache.org,34283,1689200280641/jenkins-hbase4.apache.org%2C34283%2C1689200280641.1689200283935 2023-07-12 22:18:04,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK]] 2023-07-12 22:18:04,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:04,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:04,122 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,245 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,279 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 22:18:04,330 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 22:18:04,372 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 22:18:04,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:04,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:04,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11361635680, jitterRate=0.0581347793340683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:04,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:04,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 22:18:04,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 22:18:04,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 22:18:04,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 22:18:04,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 22:18:04,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 37 msec 2023-07-12 22:18:04,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 22:18:04,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 22:18:04,574 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 22:18:04,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 22:18:04,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 22:18:04,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 22:18:04,596 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:04,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 22:18:04,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 22:18:04,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 22:18:04,665 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:04,666 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:04,666 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:04,666 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:04,671 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:04,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34283,1689200280641, sessionid=0x1015b9d43b70000, setting cluster-up flag (Was=false) 2023-07-12 22:18:04,701 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:04,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 22:18:04,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:04,729 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:04,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 22:18:04,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:04,740 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.hbase-snapshot/.tmp 2023-07-12 22:18:04,785 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(951): ClusterId : c038e8fc-22e5-4b4d-81b2-aff8d649274f 2023-07-12 22:18:04,801 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(951): ClusterId : c038e8fc-22e5-4b4d-81b2-aff8d649274f 2023-07-12 22:18:04,827 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:04,828 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:04,835 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(951): ClusterId : c038e8fc-22e5-4b4d-81b2-aff8d649274f 2023-07-12 22:18:04,837 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:04,840 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:04,840 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:04,840 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:04,841 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:04,840 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:04,841 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:04,844 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:04,845 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ReadOnlyZKClient(139): Connect 0x13a61d18 to 127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:04,846 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:04,847 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ReadOnlyZKClient(139): Connect 0x43e189ab to 127.0.0.1:59420 with 
session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:04,847 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:04,849 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ReadOnlyZKClient(139): Connect 0x5c7f7420 to 127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:04,856 DEBUG [RS:1;jenkins-hbase4:41059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5db7feb8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:04,857 DEBUG [RS:1;jenkins-hbase4:41059] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b6ac95e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:04,859 DEBUG [RS:2;jenkins-hbase4:44439] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d267fa7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:04,859 DEBUG [RS:2;jenkins-hbase4:44439] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4919e2db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:04,863 DEBUG [RS:0;jenkins-hbase4:37441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@584e8ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:04,863 DEBUG [RS:0;jenkins-hbase4:37441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78764e9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:04,882 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37441 2023-07-12 22:18:04,884 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41059 2023-07-12 22:18:04,884 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44439 2023-07-12 22:18:04,890 INFO [RS:0;jenkins-hbase4:37441] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:04,890 INFO [RS:1;jenkins-hbase4:41059] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:04,891 INFO [RS:1;jenkins-hbase4:41059] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:04,890 INFO [RS:2;jenkins-hbase4:44439] regionserver.RegionServerCoprocessorHost(66): System coprocessor 
loading is enabled 2023-07-12 22:18:04,891 INFO [RS:2;jenkins-hbase4:44439] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:04,891 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:04,891 INFO [RS:0;jenkins-hbase4:37441] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:04,891 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:04,891 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:04,895 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:44439, startcode=1689200283155 2023-07-12 22:18:04,895 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:37441, startcode=1689200282765 2023-07-12 22:18:04,895 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:41059, startcode=1689200282965 2023-07-12 22:18:04,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 22:18:04,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 22:18:04,921 DEBUG [RS:0;jenkins-hbase4:37441] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:04,921 DEBUG [RS:2;jenkins-hbase4:44439] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:04,921 DEBUG [RS:1;jenkins-hbase4:41059] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:04,924 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:04,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 22:18:04,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 22:18:05,012 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42127, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:05,012 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48485, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:05,012 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55735, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:05,062 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:05,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:05,082 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:05,084 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:05,104 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-12 22:18:05,104 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 22:18:05,104 WARN [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 22:18:05,104 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 22:18:05,104 WARN [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 22:18:05,104 WARN [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 22:18:05,124 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:05,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 22:18:05,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:05,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 22:18:05,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:05,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:05,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:05,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:05,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 22:18:05,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:05,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689200315151 2023-07-12 22:18:05,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 22:18:05,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 22:18:05,161 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:05,162 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 22:18:05,165 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:05,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 22:18:05,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 22:18:05,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 22:18:05,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 22:18:05,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 22:18:05,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 22:18:05,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 22:18:05,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 22:18:05,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 22:18:05,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200285201,5,FailOnTimeoutGroup] 2023-07-12 22:18:05,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200285201,5,FailOnTimeoutGroup] 2023-07-12 22:18:05,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 22:18:05,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:05,211 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:44439, startcode=1689200283155 2023-07-12 22:18:05,211 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:41059, startcode=1689200282965 2023-07-12 22:18:05,211 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:37441, startcode=1689200282765 2023-07-12 22:18:05,219 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,221 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:05,224 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 22:18:05,227 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,227 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:05,227 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 22:18:05,228 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 2023-07-12 22:18:05,229 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40075 2023-07-12 22:18:05,229 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,229 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36825 2023-07-12 22:18:05,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:05,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 22:18:05,229 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 2023-07-12 22:18:05,230 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40075 2023-07-12 22:18:05,230 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36825 2023-07-12 22:18:05,230 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 2023-07-12 22:18:05,230 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40075 2023-07-12 22:18:05,231 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36825 2023-07-12 22:18:05,238 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:05,239 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,239 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,240 WARN [RS:1;jenkins-hbase4:41059] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:05,240 WARN [RS:0;jenkins-hbase4:37441] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:05,240 INFO [RS:1;jenkins-hbase4:41059] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:05,240 INFO [RS:0;jenkins-hbase4:37441] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:05,240 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,241 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,241 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37441,1689200282765] 2023-07-12 22:18:05,241 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,241 WARN [RS:2;jenkins-hbase4:44439] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:05,241 INFO [RS:2;jenkins-hbase4:44439] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:05,241 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41059,1689200282965] 2023-07-12 22:18:05,241 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,241 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44439,1689200283155] 2023-07-12 22:18:05,262 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,262 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,266 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,281 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:05,283 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,283 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 
22:18:05,283 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,284 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,284 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,285 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,294 INFO [RS:2;jenkins-hbase4:44439] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:05,291 DEBUG [RS:0;jenkins-hbase4:37441] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:05,291 DEBUG [RS:1;jenkins-hbase4:41059] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:05,305 INFO [RS:1;jenkins-hbase4:41059] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:05,306 INFO [RS:0;jenkins-hbase4:37441] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:05,306 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:05,307 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:05,307 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 2023-07-12 22:18:05,322 INFO [RS:1;jenkins-hbase4:41059] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 
M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:05,323 INFO [RS:0;jenkins-hbase4:37441] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:05,328 INFO [RS:2;jenkins-hbase4:44439] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:05,338 INFO [RS:1;jenkins-hbase4:41059] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:05,338 INFO [RS:0;jenkins-hbase4:37441] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:05,338 INFO [RS:2;jenkins-hbase4:44439] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:05,340 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,339 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,340 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,341 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:05,350 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:05,358 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:05,371 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,371 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,371 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:05,376 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,374 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,377 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,378 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,378 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,378 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:05,378 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,378 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,378 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:05,377 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,379 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,379 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,379 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,379 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:05,380 DEBUG [RS:0;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,379 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,380 DEBUG [RS:2;jenkins-hbase4:44439] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,381 DEBUG [RS:1;jenkins-hbase4:41059] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:05,394 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,395 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,395 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,396 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:05,414 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,415 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:05,415 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,421 INFO [RS:0;jenkins-hbase4:37441] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:05,426 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37441,1689200282765-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,431 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,431 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,432 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,434 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:05,437 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info 2023-07-12 22:18:05,438 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:05,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,444 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:05,447 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:05,447 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:05,448 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,448 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:05,448 INFO [RS:1;jenkins-hbase4:41059] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:05,450 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41059,1689200282965-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,453 INFO [RS:2;jenkins-hbase4:44439] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:05,453 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44439,1689200283155-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:05,462 INFO [RS:0;jenkins-hbase4:37441] regionserver.Replication(203): jenkins-hbase4.apache.org,37441,1689200282765 started 2023-07-12 22:18:05,463 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37441,1689200282765, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37441, sessionid=0x1015b9d43b70001 2023-07-12 22:18:05,463 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table 2023-07-12 22:18:05,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:05,464 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:05,464 INFO [RS:1;jenkins-hbase4:41059] regionserver.Replication(203): jenkins-hbase4.apache.org,41059,1689200282965 started 2023-07-12 22:18:05,465 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41059,1689200282965, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41059, sessionid=0x1015b9d43b70002 2023-07-12 22:18:05,465 DEBUG [RS:0;jenkins-hbase4:37441] 
flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,465 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37441,1689200282765' 2023-07-12 22:18:05,465 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:05,465 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:05,465 DEBUG [RS:1;jenkins-hbase4:41059] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,466 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41059,1689200282965' 2023-07-12 22:18:05,467 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:05,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,468 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:05,469 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:05,470 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:05,471 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:05,471 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:05,471 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:05,471 DEBUG [RS:0;jenkins-hbase4:37441] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,471 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37441,1689200282765' 2023-07-12 22:18:05,471 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:05,471 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:05,471 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:05,472 DEBUG [RS:1;jenkins-hbase4:41059] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:05,472 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase4.apache.org,41059,1689200282965' 2023-07-12 22:18:05,472 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:05,473 DEBUG [RS:1;jenkins-hbase4:41059] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:05,474 DEBUG [RS:1;jenkins-hbase4:41059] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:05,474 DEBUG [RS:0;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:05,474 INFO [RS:1;jenkins-hbase4:41059] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:05,474 INFO [RS:1;jenkins-hbase4:41059] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 22:18:05,475 DEBUG [RS:0;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:05,476 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 22:18:05,475 INFO [RS:0;jenkins-hbase4:37441] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:05,478 INFO [RS:2;jenkins-hbase4:44439] regionserver.Replication(203): jenkins-hbase4.apache.org,44439,1689200283155 started 2023-07-12 22:18:05,478 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44439,1689200283155, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44439, sessionid=0x1015b9d43b70003 2023-07-12 22:18:05,478 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:05,478 DEBUG [RS:2;jenkins-hbase4:44439] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,478 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44439,1689200283155' 2023-07-12 22:18:05,478 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:05,477 INFO [RS:0;jenkins-hbase4:37441] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 22:18:05,479 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:05,479 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:05,480 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:05,480 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:05,480 DEBUG [RS:2;jenkins-hbase4:44439] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:05,480 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44439,1689200283155' 2023-07-12 22:18:05,480 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:05,481 DEBUG [RS:2;jenkins-hbase4:44439] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:05,481 DEBUG [RS:2;jenkins-hbase4:44439] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:05,481 INFO [RS:2;jenkins-hbase4:44439] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:05,481 INFO [RS:2;jenkins-hbase4:44439] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 22:18:05,488 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:05,489 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10226329440, jitterRate=-0.04759885370731354}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:05,489 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:05,489 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:05,489 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:05,489 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:05,489 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:05,489 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:05,490 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:05,490 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:05,497 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:05,497 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 22:18:05,506 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 22:18:05,521 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 22:18:05,525 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 22:18:05,586 INFO [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44439%2C1689200283155, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,44439,1689200283155, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:05,587 INFO [RS:1;jenkins-hbase4:41059] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41059%2C1689200282965, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41059,1689200282965, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:05,586 INFO [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37441%2C1689200282765, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,37441,1689200282765, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:05,611 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:05,611 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:05,611 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:05,623 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:05,623 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:05,623 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:05,627 INFO [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,44439,1689200283155/jenkins-hbase4.apache.org%2C44439%2C1689200283155.1689200285592 2023-07-12 22:18:05,628 DEBUG [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK]] 2023-07-12 22:18:05,628 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:05,628 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:05,629 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:05,640 INFO [RS:1;jenkins-hbase4:41059] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41059,1689200282965/jenkins-hbase4.apache.org%2C41059%2C1689200282965.1689200285593 2023-07-12 22:18:05,641 DEBUG [RS:1;jenkins-hbase4:41059] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK]] 2023-07-12 22:18:05,645 INFO [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,37441,1689200282765/jenkins-hbase4.apache.org%2C37441%2C1689200282765.1689200285593 2023-07-12 22:18:05,648 DEBUG [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK]] 2023-07-12 22:18:05,676 DEBUG [jenkins-hbase4:34283] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 22:18:05,694 DEBUG [jenkins-hbase4:34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 
22:18:05,697 DEBUG [jenkins-hbase4:34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:05,697 DEBUG [jenkins-hbase4:34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:05,697 DEBUG [jenkins-hbase4:34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:05,697 DEBUG [jenkins-hbase4:34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:05,701 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1689200282765, state=OPENING 2023-07-12 22:18:05,708 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 22:18:05,710 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:05,710 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:05,714 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:05,733 WARN [ReadOnlyZKClient-127.0.0.1:59420@0x7036f813] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 22:18:05,757 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:05,762 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:05,765 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37441] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:49472 deadline: 1689200345763, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,891 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:05,896 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:05,901 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49476, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:05,911 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 22:18:05,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:05,916 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37441%2C1689200282765.meta, suffix=.meta, 
logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,37441,1689200282765, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:05,941 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:05,941 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:05,952 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:05,964 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,37441,1689200282765/jenkins-hbase4.apache.org%2C37441%2C1689200282765.meta.1689200285918.meta 2023-07-12 22:18:05,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK], DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK]] 2023-07-12 22:18:05,965 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:05,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:05,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 22:18:05,978 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 22:18:05,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 22:18:05,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:05,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 22:18:05,984 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 22:18:05,987 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:05,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info 2023-07-12 22:18:05,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info 2023-07-12 22:18:05,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:05,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:05,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:05,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:05,992 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:05,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:05,995 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table 2023-07-12 22:18:05,995 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table 2023-07-12 22:18:05,995 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:05,996 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:05,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:06,000 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:06,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 22:18:06,006 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:06,007 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10803969440, jitterRate=0.006198063492774963}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:06,008 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:06,021 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689200285888 2023-07-12 22:18:06,041 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 22:18:06,043 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 22:18:06,043 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1689200282765, state=OPEN 2023-07-12 22:18:06,047 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:06,047 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:06,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 22:18:06,051 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1689200282765 in 333 msec 2023-07-12 22:18:06,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 22:18:06,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 547 msec 2023-07-12 22:18:06,063 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1260 sec 2023-07-12 22:18:06,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689200286063, completionTime=-1 2023-07-12 22:18:06,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 22:18:06,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 22:18:06,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 22:18:06,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689200346126 2023-07-12 22:18:06,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689200406126 2023-07-12 22:18:06,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 62 msec 2023-07-12 22:18:06,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34283,1689200280641-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:06,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34283,1689200280641-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:06,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34283,1689200280641-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:06,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34283, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:06,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:06,152 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 22:18:06,162 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 22:18:06,164 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:06,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 22:18:06,178 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:06,181 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:06,198 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,200 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/namespace/5dadbef7ea97919927df58525570971d empty. 2023-07-12 22:18:06,201 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,201 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 22:18:06,237 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:06,239 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5dadbef7ea97919927df58525570971d, NAME => 'hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5dadbef7ea97919927df58525570971d, disabling compactions & flushes 2023-07-12 22:18:06,254 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. after waiting 0 ms 2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:06,254 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:06,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:06,258 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:06,274 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200286261"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200286261"}]},"ts":"1689200286261"} 2023-07-12 22:18:06,280 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:06,282 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 22:18:06,285 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:06,288 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:06,291 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:06,292 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 empty. 
2023-07-12 22:18:06,293 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:06,293 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 22:18:06,320 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:06,323 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:06,330 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200286323"}]},"ts":"1689200286323"} 2023-07-12 22:18:06,334 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:06,336 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 014ce980d8eb773efb72cff5eb62d9a2, NAME => 'hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:06,338 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 22:18:06,343 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:06,343 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:06,343 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:06,343 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:06,343 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:06,345 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, ASSIGN}] 2023-07-12 22:18:06,351 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, ASSIGN 2023-07-12 22:18:06,353 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=5dadbef7ea97919927df58525570971d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:06,504 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:06,506 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:06,506 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200286506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200286506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200286506"}]},"ts":"1689200286506"} 2023-07-12 22:18:06,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:06,665 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:06,665 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:06,669 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56092, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:06,687 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:06,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5dadbef7ea97919927df58525570971d, NAME => 'hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:06,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:06,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,703 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,706 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:06,706 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:06,707 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5dadbef7ea97919927df58525570971d columnFamilyName info 2023-07-12 22:18:06,708 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(310): Store=5dadbef7ea97919927df58525570971d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:06,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,710 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:06,719 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:06,720 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5dadbef7ea97919927df58525570971d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9640326880, jitterRate=-0.10217459499835968}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:06,720 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:06,722 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d., pid=7, masterSystemTime=1689200286665 2023-07-12 22:18:06,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:06,728 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:06,730 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:06,730 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200286729"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200286729"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200286729"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200286729"}]},"ts":"1689200286729"} 2023-07-12 22:18:06,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-12 22:18:06,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41059,1689200282965 in 224 msec 2023-07-12 22:18:06,741 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 22:18:06,742 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, ASSIGN in 392 msec 2023-07-12 22:18:06,743 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:06,744 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200286743"}]},"ts":"1689200286743"} 2023-07-12 22:18:06,746 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 22:18:06,750 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:06,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 585 msec 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 014ce980d8eb773efb72cff5eb62d9a2, disabling compactions & flushes 2023-07-12 22:18:06,769 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
after waiting 0 ms 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:06,769 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:06,769 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 014ce980d8eb773efb72cff5eb62d9a2: 2023-07-12 22:18:06,772 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:06,774 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200286773"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200286773"}]},"ts":"1689200286773"} 2023-07-12 22:18:06,776 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:06,778 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:06,778 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200286778"}]},"ts":"1689200286778"} 2023-07-12 22:18:06,780 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 22:18:06,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 22:18:06,784 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:06,784 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:06,784 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:06,784 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:06,784 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:06,785 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, ASSIGN}] 2023-07-12 22:18:06,786 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:06,786 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:06,788 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=5, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, ASSIGN 2023-07-12 22:18:06,790 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:06,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:06,816 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:06,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 22:18:06,850 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:06,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-07-12 22:18:06,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 22:18:06,872 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-12 22:18:06,872 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 22:18:06,941 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:06,942 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:06,943 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200286942"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200286942"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200286942"}]},"ts":"1689200286942"} 2023-07-12 22:18:06,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=8, state=RUNNABLE; OpenRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:07,105 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
2023-07-12 22:18:07,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 014ce980d8eb773efb72cff5eb62d9a2, NAME => 'hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:07,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:07,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. service=MultiRowMutationService 2023-07-12 22:18:07,107 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 22:18:07,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:07,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,109 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,111 DEBUG [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m 2023-07-12 22:18:07,112 DEBUG [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m 2023-07-12 22:18:07,112 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
014ce980d8eb773efb72cff5eb62d9a2 columnFamilyName m 2023-07-12 22:18:07,113 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] regionserver.HStore(310): Store=014ce980d8eb773efb72cff5eb62d9a2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:07,114 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:07,124 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:07,125 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 014ce980d8eb773efb72cff5eb62d9a2; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7615cd8c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:07,125 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 014ce980d8eb773efb72cff5eb62d9a2: 2023-07-12 22:18:07,127 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2., pid=11, masterSystemTime=1689200287100 2023-07-12 22:18:07,132 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:07,132 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
2023-07-12 22:18:07,133 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200287132"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200287132"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200287132"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200287132"}]},"ts":"1689200287132"} 2023-07-12 22:18:07,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=8 2023-07-12 22:18:07,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=8, state=SUCCESS; OpenRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,37441,1689200282765 in 189 msec 2023-07-12 22:18:07,142 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-12 22:18:07,145 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, ASSIGN in 354 msec 2023-07-12 22:18:07,157 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:07,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 299 msec 2023-07-12 22:18:07,176 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:07,176 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200287176"}]},"ts":"1689200287176"} 2023-07-12 22:18:07,179 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 22:18:07,182 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:07,184 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 902 msec 2023-07-12 22:18:07,189 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 22:18:07,192 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 22:18:07,192 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.778sec 2023-07-12 22:18:07,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 22:18:07,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 22:18:07,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 22:18:07,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34283,1689200280641-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 22:18:07,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34283,1689200280641-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 22:18:07,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 22:18:07,230 DEBUG [Listener at localhost/40739] zookeeper.ReadOnlyZKClient(139): Connect 0x3041a83e to 127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:07,237 DEBUG [Listener at localhost/40739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2081f94e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:07,253 DEBUG [hconnection-0x4b822857-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:07,266 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49492, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:07,280 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:07,282 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:07,292 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 22:18:07,293 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 22:18:07,361 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:07,361 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:07,364 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:07,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 22:18:07,392 DEBUG [Listener at localhost/40739] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 22:18:07,407 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45482, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 22:18:07,426 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 22:18:07,426 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:07,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 22:18:07,433 DEBUG [Listener at localhost/40739] zookeeper.ReadOnlyZKClient(139): Connect 0x3b21cb93 to 127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:07,440 DEBUG [Listener at localhost/40739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db71259, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:07,441 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:07,450 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:07,454 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b9d43b7000a connected 2023-07-12 22:18:07,481 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=425, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=375, ProcessCount=176, AvailableMemoryMB=4722 2023-07-12 22:18:07,484 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 22:18:07,512 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:07,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:07,558 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 22:18:07,570 INFO [Listener at localhost/40739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:07,571 INFO [Listener at localhost/40739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:07,575 INFO [Listener at localhost/40739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41907 2023-07-12 22:18:07,575 INFO [Listener at localhost/40739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:07,586 DEBUG [Listener at localhost/40739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:07,587 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:07,591 INFO [Listener at localhost/40739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:07,595 INFO [Listener at localhost/40739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41907 connecting to ZooKeeper ensemble=127.0.0.1:59420 2023-07-12 22:18:07,598 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:419070x0, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:07,600 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(162): regionserver:419070x0, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:07,600 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:41907-0x1015b9d43b7000b connected 2023-07-12 22:18:07,601 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 22:18:07,602 DEBUG [Listener at localhost/40739] zookeeper.ZKUtil(164): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:07,604 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41907 2023-07-12 22:18:07,604 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41907 2023-07-12 22:18:07,606 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41907 2023-07-12 22:18:07,610 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41907 2023-07-12 22:18:07,612 DEBUG [Listener at localhost/40739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41907 2023-07-12 22:18:07,614 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:07,614 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:07,614 INFO [Listener at localhost/40739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:07,614 INFO [Listener at localhost/40739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:07,615 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:07,615 INFO [Listener at localhost/40739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:07,615 INFO [Listener at localhost/40739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 22:18:07,615 INFO [Listener at localhost/40739] http.HttpServer(1146): Jetty bound to port 33587 2023-07-12 22:18:07,616 INFO [Listener at localhost/40739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:07,621 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:07,621 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@700b9517{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:07,621 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:07,621 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@84f5bbc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:07,751 INFO [Listener at localhost/40739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:07,752 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:07,752 INFO [Listener at localhost/40739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:07,752 INFO [Listener at localhost/40739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:07,754 INFO [Listener at localhost/40739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:07,755 INFO [Listener at localhost/40739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@57235bf5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/java.io.tmpdir/jetty-0_0_0_0-33587-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5348005237770319146/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:07,758 INFO [Listener at localhost/40739] server.AbstractConnector(333): Started ServerConnector@4f2d7206{HTTP/1.1, (http/1.1)}{0.0.0.0:33587} 2023-07-12 22:18:07,758 INFO [Listener at localhost/40739] server.Server(415): Started @12831ms 2023-07-12 22:18:07,761 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(951): ClusterId : c038e8fc-22e5-4b4d-81b2-aff8d649274f 2023-07-12 22:18:07,761 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:07,764 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:07,764 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:07,766 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:07,767 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ReadOnlyZKClient(139): Connect 0x4ffe55ff to 
127.0.0.1:59420 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:07,772 DEBUG [RS:3;jenkins-hbase4:41907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4139a270, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:07,772 DEBUG [RS:3;jenkins-hbase4:41907] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45063e8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:07,781 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41907 2023-07-12 22:18:07,781 INFO [RS:3;jenkins-hbase4:41907] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:07,781 INFO [RS:3;jenkins-hbase4:41907] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:07,781 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:07,782 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34283,1689200280641 with isa=jenkins-hbase4.apache.org/172.31.14.131:41907, startcode=1689200287570 2023-07-12 22:18:07,782 DEBUG [RS:3;jenkins-hbase4:41907] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:07,785 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51647, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:07,785 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,785 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:07,786 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105 2023-07-12 22:18:07,786 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40075 2023-07-12 22:18:07,786 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36825 2023-07-12 22:18:07,790 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:07,790 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:07,790 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:07,790 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:07,790 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:07,791 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,791 WARN [RS:3;jenkins-hbase4:41907] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:07,791 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:07,791 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41907,1689200287570] 2023-07-12 22:18:07,791 INFO [RS:3;jenkins-hbase4:41907] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:07,791 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,791 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:07,791 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:07,795 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:07,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,798 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34283,1689200280641] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 22:18:07,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:07,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:07,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:07,799 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,801 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,802 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:07,803 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,803 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:07,804 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ZKUtil(162): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,805 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:07,806 INFO [RS:3;jenkins-hbase4:41907] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:07,809 INFO [RS:3;jenkins-hbase4:41907] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:07,817 INFO [RS:3;jenkins-hbase4:41907] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:07,817 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:07,817 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:07,819 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,819 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,820 DEBUG [RS:3;jenkins-hbase4:41907] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:07,822 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:07,822 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:07,822 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:07,835 INFO [RS:3;jenkins-hbase4:41907] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:07,835 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41907,1689200287570-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:07,847 INFO [RS:3;jenkins-hbase4:41907] regionserver.Replication(203): jenkins-hbase4.apache.org,41907,1689200287570 started 2023-07-12 22:18:07,847 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41907,1689200287570, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41907, sessionid=0x1015b9d43b7000b 2023-07-12 22:18:07,847 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:07,848 DEBUG [RS:3;jenkins-hbase4:41907] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,848 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41907,1689200287570' 2023-07-12 22:18:07,848 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:07,848 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41907,1689200287570' 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:07,849 DEBUG [RS:3;jenkins-hbase4:41907] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:07,850 DEBUG [RS:3;jenkins-hbase4:41907] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:07,850 INFO [RS:3;jenkins-hbase4:41907] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:07,850 INFO [RS:3;jenkins-hbase4:41907] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
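RS:3 above is an extra region server joining an already running mini cluster. A hedged sketch, using the HBase 2.x test utility API (names assumed, not taken from this test's source), of how such a fourth server is typically started:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class StartExtraRegionServer {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                     // three region servers to begin with
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        cluster.startRegionServer();                  // brings up a fourth RS, like RS:3 above
        util.shutdownMiniCluster();
      }
    }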
2023-07-12 22:18:07,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:07,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:07,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:07,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:07,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:07,867 DEBUG [hconnection-0x724df952-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:07,871 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49502, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:07,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:07,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:07,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:07,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:07,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:45482 deadline: 1689201487887, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
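The ConstraintException above is expected: the test adds an rsgroup named "master" and then tries to move the master's own host:port into it, which is rejected because that address is not a live region server. An illustrative sketch of that sequence with the hbase-rsgroup client (API usage assumed, not the test's exact code):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterToGroup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's RPC address is not a region server, so the move is rejected.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34283)),
                "master");
          } catch (ConstraintException expected) {
            // "Server ... is either offline or it does not exist." -- same error as in the log.
          }
        }
      }
    }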
2023-07-12 22:18:07,890 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:07,892 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:07,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:07,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:07,894 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:07,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:07,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:07,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:07,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:07,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:07,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:07,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:07,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:07,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:07,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:07,919 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:07,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:07,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:07,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(238): Moving server region 014ce980d8eb773efb72cff5eb62d9a2, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:07,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:07,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:07,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:07,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:07,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, REOPEN/MOVE 2023-07-12 22:18:07,933 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, REOPEN/MOVE 2023-07-12 22:18:07,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:07,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:07,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:07,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:07,935 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:07,935 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,935 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200287935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200287935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200287935"}]},"ts":"1689200287935"} 2023-07-12 22:18:07,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 22:18:07,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(238): Moving server region 5dadbef7ea97919927df58525570971d, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:07,937 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-12 22:18:07,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:07,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:07,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:07,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:07,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:07,938 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1689200282765, state=CLOSING 2023-07-12 22:18:07,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:07,940 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:07,940 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:07,940 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:07,942 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE 2023-07-12 22:18:07,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-12 22:18:07,943 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE 2023-07-12 22:18:07,946 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:07,953 INFO [RS:3;jenkins-hbase4:41907] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41907%2C1689200287570, suffix=, logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41907,1689200287570, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:07,978 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:07,979 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:07,979 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:07,985 INFO [RS:3;jenkins-hbase4:41907] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,41907,1689200287570/jenkins-hbase4.apache.org%2C41907%2C1689200287570.1689200287954 2023-07-12 22:18:07,994 DEBUG [RS:3;jenkins-hbase4:41907] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK]] 2023-07-12 22:18:08,104 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-12 22:18:08,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:08,105 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:08,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:08,105 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:08,105 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:08,106 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-12 22:18:08,213 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/info/3c4077dfc639489fbdcea3a1faba5060 2023-07-12 22:18:08,292 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/table/f623fc8e380b457981137dcabc991d06 2023-07-12 22:18:08,304 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/info/3c4077dfc639489fbdcea3a1faba5060 as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info/3c4077dfc639489fbdcea3a1faba5060 2023-07-12 22:18:08,314 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info/3c4077dfc639489fbdcea3a1faba5060, entries=21, sequenceid=15, filesize=7.1 K 2023-07-12 22:18:08,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/table/f623fc8e380b457981137dcabc991d06 as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table/f623fc8e380b457981137dcabc991d06 2023-07-12 22:18:08,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table/f623fc8e380b457981137dcabc991d06, entries=4, sequenceid=15, filesize=4.8 K 2023-07-12 22:18:08,329 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 222ms, sequenceid=15, compaction requested=false 2023-07-12 22:18:08,330 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 22:18:08,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-12 22:18:08,344 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 
22:18:08,344 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:08,344 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:08,345 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44439,1689200283155 record at close sequenceid=15 2023-07-12 22:18:08,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-12 22:18:08,350 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-12 22:18:08,354 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-12 22:18:08,354 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1689200282765 in 410 msec 2023-07-12 22:18:08,356 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:08,506 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:08,506 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44439,1689200283155, state=OPENING 2023-07-12 22:18:08,508 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:08,508 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:08,508 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:08,662 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:08,663 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:08,664 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54036, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:08,669 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 22:18:08,669 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:08,671 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44439%2C1689200283155.meta, suffix=.meta, 
logDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,44439,1689200283155, archiveDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs, maxLogs=32 2023-07-12 22:18:08,694 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK] 2023-07-12 22:18:08,696 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK] 2023-07-12 22:18:08,696 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK] 2023-07-12 22:18:08,701 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/WALs/jenkins-hbase4.apache.org,44439,1689200283155/jenkins-hbase4.apache.org%2C44439%2C1689200283155.meta.1689200288672.meta 2023-07-12 22:18:08,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK]] 2023-07-12 22:18:08,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:08,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:08,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 22:18:08,703 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
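The pid=13 REOPEN/MOVE procedure above (close hbase:meta on 37441, flush, reopen on 44439) is what a region move looks like from the inside. A hedged sketch of the client-side equivalent with the Admin API (usage assumed; the test drives this indirectly through the rsgroup server move):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveMetaRegion {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // 1588230740 is the fixed encoded name of the hbase:meta region.
          ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 44439, 1689200283155L);
          admin.move(Bytes.toBytes("1588230740"), dest);
        }
      }
    }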
2023-07-12 22:18:08,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 22:18:08,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:08,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 22:18:08,703 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 22:18:08,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:08,715 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info 2023-07-12 22:18:08,715 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info 2023-07-12 22:18:08,716 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:08,736 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info/3c4077dfc639489fbdcea3a1faba5060 2023-07-12 22:18:08,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:08,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:08,740 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:08,740 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier 2023-07-12 
22:18:08,740 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:08,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:08,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:08,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table 2023-07-12 22:18:08,753 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table 2023-07-12 22:18:08,754 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:08,782 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table/f623fc8e380b457981137dcabc991d06 2023-07-12 22:18:08,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:08,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:08,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740 2023-07-12 22:18:08,791 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 22:18:08,794 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:08,795 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10402500000, jitterRate=-0.031191691756248474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:08,795 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:08,797 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=17, masterSystemTime=1689200288662 2023-07-12 22:18:08,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 22:18:08,803 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 22:18:08,803 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44439,1689200283155, state=OPEN 2023-07-12 22:18:08,805 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:08,805 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:08,808 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE 2023-07-12 22:18:08,812 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:08,813 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200288812"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200288812"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200288812"}]},"ts":"1689200288812"} 2023-07-12 22:18:08,814 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37441] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 279 connection: 172.31.14.131:49472 deadline: 1689200348814, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=15. 
2023-07-12 22:18:08,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-12 22:18:08,814 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44439,1689200283155 in 298 msec 2023-07-12 22:18:08,817 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 879 msec 2023-07-12 22:18:08,916 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:08,919 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54052, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:08,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; CloseRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:08,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 22:18:08,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:08,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 014ce980d8eb773efb72cff5eb62d9a2, disabling compactions & flushes 2023-07-12 22:18:08,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:08,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:08,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. after waiting 0 ms 2023-07-12 22:18:08,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
2023-07-12 22:18:08,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 014ce980d8eb773efb72cff5eb62d9a2 1/1 column families, dataSize=1.38 KB heapSize=2.35 KB 2023-07-12 22:18:09,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/.tmp/m/a6ae9f6c5df24cb09471deeb16b3fd2d 2023-07-12 22:18:09,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/.tmp/m/a6ae9f6c5df24cb09471deeb16b3fd2d as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m/a6ae9f6c5df24cb09471deeb16b3fd2d 2023-07-12 22:18:09,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m/a6ae9f6c5df24cb09471deeb16b3fd2d, entries=3, sequenceid=9, filesize=5.2 K 2023-07-12 22:18:09,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1410, heapSize ~2.34 KB/2392, currentSize=0 B/0 for 014ce980d8eb773efb72cff5eb62d9a2 in 75ms, sequenceid=9, compaction requested=false 2023-07-12 22:18:09,039 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 22:18:09,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 22:18:09,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:09,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
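The close of 014ce980d8eb773efb72cff5eb62d9a2 above shows the usual flush-on-close sequence: the memstore is written to a store file under .tmp, committed into the 'm' column family directory, and a recovered.edits seqid marker records the max sequence id. A minimal sketch, assuming the stock Admin API, of triggering the same flush explicitly:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushRsGroupTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.flush(TableName.valueOf("hbase:rsgroup"));   // flushes the group metadata region
        }
      }
    }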
2023-07-12 22:18:09,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 014ce980d8eb773efb72cff5eb62d9a2: 2023-07-12 22:18:09,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 014ce980d8eb773efb72cff5eb62d9a2 move to jenkins-hbase4.apache.org,44439,1689200283155 record at close sequenceid=9 2023-07-12 22:18:09,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,054 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=CLOSED 2023-07-12 22:18:09,054 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200289054"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200289054"}]},"ts":"1689200289054"} 2023-07-12 22:18:09,059 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-12 22:18:09,059 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,37441,1689200282765 in 1.1170 sec 2023-07-12 22:18:09,060 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:09,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5dadbef7ea97919927df58525570971d, disabling compactions & flushes 2023-07-12 22:18:09,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:09,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:09,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. after waiting 0 ms 2023-07-12 22:18:09,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:09,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5dadbef7ea97919927df58525570971d 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 22:18:09,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/.tmp/info/0be1a286cac24ae4b40e2298b7fa2970 2023-07-12 22:18:09,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/.tmp/info/0be1a286cac24ae4b40e2298b7fa2970 as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info/0be1a286cac24ae4b40e2298b7fa2970 2023-07-12 22:18:09,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info/0be1a286cac24ae4b40e2298b7fa2970, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 22:18:09,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 5dadbef7ea97919927df58525570971d in 52ms, sequenceid=6, compaction requested=false 2023-07-12 22:18:09,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 22:18:09,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 22:18:09,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:09,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:09,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5dadbef7ea97919927df58525570971d move to jenkins-hbase4.apache.org,41907,1689200287570 record at close sequenceid=6 2023-07-12 22:18:09,155 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,156 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=CLOSED 2023-07-12 22:18:09,156 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200289156"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200289156"}]},"ts":"1689200289156"} 2023-07-12 22:18:09,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-12 22:18:09,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; CloseRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41059,1689200282965 in 236 msec 2023-07-12 22:18:09,162 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:09,163 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 22:18:09,163 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:09,163 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200289163"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200289163"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200289163"}]},"ts":"1689200289163"} 2023-07-12 22:18:09,164 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:09,164 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200289164"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200289164"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200289164"}]},"ts":"1689200289164"} 2023-07-12 22:18:09,165 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:09,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=14, state=RUNNABLE; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:09,320 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:09,321 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:09,327 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:09,328 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:09,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 014ce980d8eb773efb72cff5eb62d9a2, NAME => 'hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. service=MultiRowMutationService 2023-07-12 22:18:09,329 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,331 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,332 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:09,332 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5dadbef7ea97919927df58525570971d, NAME => 'hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:09,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:09,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,333 DEBUG [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m 2023-07-12 22:18:09,333 DEBUG [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m 2023-07-12 22:18:09,334 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 014ce980d8eb773efb72cff5eb62d9a2 columnFamilyName m 2023-07-12 22:18:09,335 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,338 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:09,338 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:09,339 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5dadbef7ea97919927df58525570971d columnFamilyName info 2023-07-12 22:18:09,346 DEBUG [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] regionserver.HStore(539): loaded hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m/a6ae9f6c5df24cb09471deeb16b3fd2d 2023-07-12 22:18:09,354 INFO [StoreOpener-014ce980d8eb773efb72cff5eb62d9a2-1] regionserver.HStore(310): Store=014ce980d8eb773efb72cff5eb62d9a2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:09,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,364 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(539): loaded hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info/0be1a286cac24ae4b40e2298b7fa2970 2023-07-12 22:18:09,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 
22:18:09,365 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(310): Store=5dadbef7ea97919927df58525570971d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:09,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,368 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:09,372 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 014ce980d8eb773efb72cff5eb62d9a2; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1610b89c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:09,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 014ce980d8eb773efb72cff5eb62d9a2: 2023-07-12 22:18:09,373 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2., pid=19, masterSystemTime=1689200289318 2023-07-12 22:18:09,375 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:09,376 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5dadbef7ea97919927df58525570971d; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10580193600, jitterRate=-0.014642685651779175}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:09,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:09,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:09,376 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
2023-07-12 22:18:09,377 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d., pid=20, masterSystemTime=1689200289320 2023-07-12 22:18:09,377 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=014ce980d8eb773efb72cff5eb62d9a2, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:09,380 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200289377"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200289377"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200289377"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200289377"}]},"ts":"1689200289377"} 2023-07-12 22:18:09,389 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:09,389 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200289388"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200289388"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200289388"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200289388"}]},"ts":"1689200289388"} 2023-07-12 22:18:09,392 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-12 22:18:09,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:09,395 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:09,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure 014ce980d8eb773efb72cff5eb62d9a2, server=jenkins-hbase4.apache.org,44439,1689200283155 in 219 msec 2023-07-12 22:18:09,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=14 2023-07-12 22:18:09,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=014ce980d8eb773efb72cff5eb62d9a2, REOPEN/MOVE in 1.4630 sec 2023-07-12 22:18:09,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=14, state=SUCCESS; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41907,1689200287570 in 226 msec 2023-07-12 22:18:09,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE in 1.4620 sec 2023-07-12 22:18:09,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to default 2023-07-12 22:18:09,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:09,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:09,946 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37441] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:49502 deadline: 1689200349946, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=9. 2023-07-12 22:18:10,049 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37441] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:49502 deadline: 1689200350049, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=15. 
2023-07-12 22:18:10,151 DEBUG [hconnection-0x724df952-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:10,155 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:10,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:10,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:10,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:10,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:10,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:10,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:10,201 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:10,204 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37441] ipc.CallRunner(144): callId: 52 service: ClientService methodName: ExecService size: 617 connection: 172.31.14.131:49472 deadline: 1689200350204, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=9. 
2023-07-12 22:18:10,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-12 22:18:10,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:10,311 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:10,312 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:10,312 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:10,313 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:10,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:10,322 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:10,328 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,328 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,328 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,328 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,329 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,329 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 empty. 2023-07-12 22:18:10,329 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c empty. 2023-07-12 22:18:10,329 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e empty. 
2023-07-12 22:18:10,329 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c empty. 2023-07-12 22:18:10,332 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,332 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c empty. 2023-07-12 22:18:10,332 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,333 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,333 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,333 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,333 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 22:18:10,369 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:10,371 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb8ce840cf7fa1f0220af2d8aa3dd240, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:10,371 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f03805502a7075bca1f3a0dd39fad28c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:10,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b0d715e018ed345e9f110519904a740c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:10,410 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,410 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing fb8ce840cf7fa1f0220af2d8aa3dd240, disabling compactions & flushes 2023-07-12 22:18:10,411 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. after waiting 0 ms 2023-07-12 22:18:10,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,411 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 
2023-07-12 22:18:10,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,411 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for fb8ce840cf7fa1f0220af2d8aa3dd240: 2023-07-12 22:18:10,412 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f03805502a7075bca1f3a0dd39fad28c, disabling compactions & flushes 2023-07-12 22:18:10,412 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:10,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:10,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => fecf495a145b327e29e809685aeb7d2e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:10,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. after waiting 0 ms 2023-07-12 22:18:10,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:10,413 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 
2023-07-12 22:18:10,413 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f03805502a7075bca1f3a0dd39fad28c: 2023-07-12 22:18:10,414 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8f7d61dcd144d9e188a467ad467dcb0c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:10,419 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,420 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b0d715e018ed345e9f110519904a740c, disabling compactions & flushes 2023-07-12 22:18:10,420 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:10,420 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:10,420 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. after waiting 0 ms 2023-07-12 22:18:10,420 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:10,420 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 
2023-07-12 22:18:10,420 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b0d715e018ed345e9f110519904a740c: 2023-07-12 22:18:10,439 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing fecf495a145b327e29e809685aeb7d2e, disabling compactions & flushes 2023-07-12 22:18:10,440 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. after waiting 0 ms 2023-07-12 22:18:10,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,440 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,440 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for fecf495a145b327e29e809685aeb7d2e: 2023-07-12 22:18:10,441 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,441 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8f7d61dcd144d9e188a467ad467dcb0c, disabling compactions & flushes 2023-07-12 22:18:10,441 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:10,441 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:10,441 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
after waiting 0 ms 2023-07-12 22:18:10,441 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:10,442 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:10,442 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8f7d61dcd144d9e188a467ad467dcb0c: 2023-07-12 22:18:10,445 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:10,446 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200290446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200290446"}]},"ts":"1689200290446"} 2023-07-12 22:18:10,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200290446"}]},"ts":"1689200290446"} 2023-07-12 22:18:10,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200290446"}]},"ts":"1689200290446"} 2023-07-12 22:18:10,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200290446"}]},"ts":"1689200290446"} 2023-07-12 22:18:10,447 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200290446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200290446"}]},"ts":"1689200290446"} 2023-07-12 22:18:10,496 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 22:18:10,498 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:10,498 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200290498"}]},"ts":"1689200290498"} 2023-07-12 22:18:10,510 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 22:18:10,515 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:10,515 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:10,515 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:10,515 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:10,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, ASSIGN}] 2023-07-12 22:18:10,519 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, ASSIGN 2023-07-12 22:18:10,520 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, ASSIGN 2023-07-12 22:18:10,521 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, ASSIGN 2023-07-12 22:18:10,522 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, ASSIGN 2023-07-12 22:18:10,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=21 2023-07-12 22:18:10,531 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:10,531 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:10,531 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:10,531 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:10,532 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, ASSIGN 2023-07-12 22:18:10,535 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:10,681 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 22:18:10,685 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:10,686 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290685"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200290685"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200290685"}]},"ts":"1689200290685"} 2023-07-12 22:18:10,686 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:10,686 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:10,686 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290686"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200290686"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200290686"}]},"ts":"1689200290686"} 2023-07-12 22:18:10,687 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200290686"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200290686"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200290686"}]},"ts":"1689200290686"} 2023-07-12 22:18:10,686 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:10,687 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200290686"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200290686"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200290686"}]},"ts":"1689200290686"} 2023-07-12 22:18:10,686 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:10,688 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290686"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200290686"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200290686"}]},"ts":"1689200290686"} 2023-07-12 22:18:10,690 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=25, state=RUNNABLE; OpenRegionProcedure 
fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:10,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=23, state=RUNNABLE; OpenRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:10,696 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=22, state=RUNNABLE; OpenRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:10,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=26, state=RUNNABLE; OpenRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:10,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=24, state=RUNNABLE; OpenRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:10,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:10,814 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 22:18:10,815 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:10,815 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 22:18:10,815 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:10,815 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 22:18:10,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:10,858 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb8ce840cf7fa1f0220af2d8aa3dd240, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 22:18:10,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 
2023-07-12 22:18:10,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fecf495a145b327e29e809685aeb7d2e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 22:18:10,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,861 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,862 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,864 DEBUG [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/f 2023-07-12 22:18:10,864 DEBUG [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/f 2023-07-12 22:18:10,865 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb8ce840cf7fa1f0220af2d8aa3dd240 columnFamilyName f 2023-07-12 22:18:10,866 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] regionserver.HStore(310): Store=fb8ce840cf7fa1f0220af2d8aa3dd240/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:10,866 DEBUG [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/f 2023-07-12 22:18:10,868 DEBUG [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/f 2023-07-12 22:18:10,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,869 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fecf495a145b327e29e809685aeb7d2e columnFamilyName f 2023-07-12 22:18:10,871 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] regionserver.HStore(310): Store=fecf495a145b327e29e809685aeb7d2e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:10,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:10,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:10,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:10,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb8ce840cf7fa1f0220af2d8aa3dd240; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11189811040, jitterRate=0.042132362723350525}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:10,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb8ce840cf7fa1f0220af2d8aa3dd240: 2023-07-12 22:18:10,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240., pid=29, masterSystemTime=1689200290852 2023-07-12 22:18:10,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:10,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fecf495a145b327e29e809685aeb7d2e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10578513920, jitterRate=-0.014799118041992188}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:10,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:10,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 
2023-07-12 22:18:10,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b0d715e018ed345e9f110519904a740c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 22:18:10,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fecf495a145b327e29e809685aeb7d2e: 2023-07-12 22:18:10,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e., pid=27, masterSystemTime=1689200290848 2023-07-12 22:18:10,889 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,891 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:10,891 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200290891"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200290891"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200290891"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200290891"}]},"ts":"1689200290891"} 2023-07-12 22:18:10,892 DEBUG [StoreOpener-b0d715e018ed345e9f110519904a740c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/f 2023-07-12 22:18:10,892 DEBUG [StoreOpener-b0d715e018ed345e9f110519904a740c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/f 2023-07-12 22:18:10,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open 
deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:10,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:10,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f03805502a7075bca1f3a0dd39fad28c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 22:18:10,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,895 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b0d715e018ed345e9f110519904a740c columnFamilyName f 2023-07-12 22:18:10,897 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:10,897 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290897"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200290897"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200290897"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200290897"}]},"ts":"1689200290897"} 2023-07-12 22:18:10,898 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] regionserver.HStore(310): Store=b0d715e018ed345e9f110519904a740c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:10,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=22 2023-07-12 22:18:10,912 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=22, state=SUCCESS; OpenRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,41907,1689200287570 in 206 msec 2023-07-12 22:18:10,912 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=25 2023-07-12 22:18:10,912 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=25, state=SUCCESS; OpenRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,44439,1689200283155 in 214 msec 2023-07-12 22:18:10,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, ASSIGN in 393 msec 2023-07-12 22:18:10,916 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, ASSIGN in 393 msec 2023-07-12 22:18:10,917 DEBUG [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/f 2023-07-12 22:18:10,917 DEBUG [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/f 2023-07-12 22:18:10,917 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f03805502a7075bca1f3a0dd39fad28c columnFamilyName f 2023-07-12 22:18:10,918 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] regionserver.HStore(310): Store=f03805502a7075bca1f3a0dd39fad28c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:10,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:10,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:10,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b0d715e018ed345e9f110519904a740c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9830434880, jitterRate=-0.08446940779685974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:10,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b0d715e018ed345e9f110519904a740c: 2023-07-12 22:18:10,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:10,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c., pid=31, masterSystemTime=1689200290852 2023-07-12 22:18:10,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:10,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:10,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
2023-07-12 22:18:10,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f7d61dcd144d9e188a467ad467dcb0c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 22:18:10,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:10,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:10,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f03805502a7075bca1f3a0dd39fad28c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11557941920, jitterRate=0.0764172226190567}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:10,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f03805502a7075bca1f3a0dd39fad28c: 2023-07-12 22:18:10,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c., pid=28, masterSystemTime=1689200290848 2023-07-12 22:18:10,943 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:10,944 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290943"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200290943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200290943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200290943"}]},"ts":"1689200290943"} 2023-07-12 22:18:10,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 
2023-07-12 22:18:10,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:10,946 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:10,949 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200290946"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200290946"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200290946"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200290946"}]},"ts":"1689200290946"} 2023-07-12 22:18:10,958 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,960 DEBUG [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/f 2023-07-12 22:18:10,960 DEBUG [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/f 2023-07-12 22:18:10,961 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f7d61dcd144d9e188a467ad467dcb0c columnFamilyName f 2023-07-12 22:18:10,962 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] regionserver.HStore(310): Store=8f7d61dcd144d9e188a467ad467dcb0c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:10,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,967 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=24 2023-07-12 22:18:10,968 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished 
pid=31, ppid=24, state=SUCCESS; OpenRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 247 msec 2023-07-12 22:18:10,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,971 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=23 2023-07-12 22:18:10,971 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=23, state=SUCCESS; OpenRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,44439,1689200283155 in 273 msec 2023-07-12 22:18:10,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, ASSIGN in 451 msec 2023-07-12 22:18:10,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:10,981 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, ASSIGN in 455 msec 2023-07-12 22:18:11,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:11,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f7d61dcd144d9e188a467ad467dcb0c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10966495840, jitterRate=0.021334514021873474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:11,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f7d61dcd144d9e188a467ad467dcb0c: 2023-07-12 22:18:11,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c., pid=30, masterSystemTime=1689200290852 2023-07-12 22:18:11,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:11,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
2023-07-12 22:18:11,021 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:11,021 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291021"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200291021"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200291021"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200291021"}]},"ts":"1689200291021"} 2023-07-12 22:18:11,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=26 2023-07-12 22:18:11,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=26, state=SUCCESS; OpenRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 325 msec 2023-07-12 22:18:11,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=21 2023-07-12 22:18:11,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, ASSIGN in 518 msec 2023-07-12 22:18:11,044 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:11,045 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200291044"}]},"ts":"1689200291044"} 2023-07-12 22:18:11,047 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 22:18:11,051 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:11,063 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 854 msec 2023-07-12 22:18:11,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:11,328 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-12 22:18:11,329 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-12 22:18:11,330 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:11,334 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37441] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:49492 deadline: 1689200351334, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=15. 2023-07-12 22:18:11,437 DEBUG [hconnection-0x4b822857-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:11,441 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:11,451 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 22:18:11,452 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:11,452 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-12 22:18:11,453 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:11,458 DEBUG [Listener at localhost/40739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:11,464 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:11,468 DEBUG [Listener at localhost/40739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:11,470 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50294, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:11,471 DEBUG [Listener at localhost/40739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:11,474 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34036, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:11,476 DEBUG [Listener at localhost/40739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:11,478 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:11,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:11,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:11,492 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:11,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:11,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:11,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region fb8ce840cf7fa1f0220af2d8aa3dd240 to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:11,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:11,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:11,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:11,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:11,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, REOPEN/MOVE 2023-07-12 22:18:11,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region f03805502a7075bca1f3a0dd39fad28c to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,515 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, REOPEN/MOVE 2023-07-12 22:18:11,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:11,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:11,515 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:11,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:11,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:11,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, REOPEN/MOVE 2023-07-12 22:18:11,516 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:11,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region b0d715e018ed345e9f110519904a740c to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,517 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, REOPEN/MOVE 2023-07-12 22:18:11,517 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291516"}]},"ts":"1689200291516"} 2023-07-12 22:18:11,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:11,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:11,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:11,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:11,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:11,520 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:11,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, REOPEN/MOVE 2023-07-12 22:18:11,522 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 
2023-07-12 22:18:11,523 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, REOPEN/MOVE 2023-07-12 22:18:11,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region fecf495a145b327e29e809685aeb7d2e to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,526 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291522"}]},"ts":"1689200291522"} 2023-07-12 22:18:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:11,532 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:11,532 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291532"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291532"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291532"}]},"ts":"1689200291532"} 2023-07-12 22:18:11,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, REOPEN/MOVE 2023-07-12 22:18:11,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region 8f7d61dcd144d9e188a467ad467dcb0c to RSGroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:11,534 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, REOPEN/MOVE 2023-07-12 22:18:11,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:11,536 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=34, state=RUNNABLE; CloseRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:11,537 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:11,537 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291537"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291537"}]},"ts":"1689200291537"} 2023-07-12 22:18:11,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:11,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:11,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:11,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:11,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:11,548 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=36, state=RUNNABLE; CloseRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:11,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, REOPEN/MOVE 2023-07-12 22:18:11,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_48110529, current retry=0 2023-07-12 22:18:11,555 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, REOPEN/MOVE 2023-07-12 22:18:11,557 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:11,557 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291557"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291557"}]},"ts":"1689200291557"} 2023-07-12 22:18:11,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=41, ppid=39, state=RUNNABLE; CloseRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:11,609 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 22:18:11,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:11,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb8ce840cf7fa1f0220af2d8aa3dd240, disabling compactions & flushes 2023-07-12 22:18:11,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:11,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:11,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. after waiting 0 ms 2023-07-12 22:18:11,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:11,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:11,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f03805502a7075bca1f3a0dd39fad28c, disabling compactions & flushes 2023-07-12 22:18:11,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:11,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:11,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. after waiting 0 ms 2023-07-12 22:18:11,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:11,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:11,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 
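Editorial note: the entries above record RSGroupAdminServer moving the five regions of Group_testTableMoveTruncateAndDrop to the group Group_testTableMoveTruncateAndDrop_48110529, one REOPEN/MOVE TransitRegionStateProcedure per region, each spawning a CloseRegionProcedure on the region's current server. A minimal client-side sketch of the call that drives this flow, assuming the branch-2.4 hbase-rsgroup client (RSGroupAdminClient) against a running cluster; the class and main method here are illustrative, only the table and group names are taken from the log:

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // The RSGroupAdminService.MoveTables request logged later in this section
      // corresponds to a moveTables() call like this one.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_48110529");
      // The master answers by scheduling one REOPEN/MOVE TransitRegionStateProcedure
      // per region (close on the old server, reopen on a server in the target group),
      // which is what the procedure entries in this log show.
    }
  }
}

Because the move is executed through the master's procedure framework, each step is persisted before it runs, which is why the log interleaves procedure state transitions with the per-region close/open work.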
2023-07-12 22:18:11,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb8ce840cf7fa1f0220af2d8aa3dd240: 2023-07-12 22:18:11,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fb8ce840cf7fa1f0220af2d8aa3dd240 move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:11,704 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:11,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:11,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f03805502a7075bca1f3a0dd39fad28c: 2023-07-12 22:18:11,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:11,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f03805502a7075bca1f3a0dd39fad28c move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:11,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:11,709 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=CLOSED 2023-07-12 22:18:11,709 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200291709"}]},"ts":"1689200291709"} 2023-07-12 22:18:11,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f7d61dcd144d9e188a467ad467dcb0c, disabling compactions & flushes 2023-07-12 22:18:11,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:11,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:11,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. after waiting 0 ms 2023-07-12 22:18:11,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
2023-07-12 22:18:11,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:11,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:11,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fecf495a145b327e29e809685aeb7d2e, disabling compactions & flushes 2023-07-12 22:18:11,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:11,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:11,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. after waiting 0 ms 2023-07-12 22:18:11,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:11,715 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=CLOSED 2023-07-12 22:18:11,715 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291715"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200291715"}]},"ts":"1689200291715"} 2023-07-12 22:18:11,729 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-12 22:18:11,729 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,41907,1689200287570 in 192 msec 2023-07-12 22:18:11,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:11,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:11,731 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:11,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing 
ppid=33 2023-07-12 22:18:11,732 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,44439,1689200283155 in 185 msec 2023-07-12 22:18:11,733 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:11,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:11,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:11,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f7d61dcd144d9e188a467ad467dcb0c: 2023-07-12 22:18:11,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8f7d61dcd144d9e188a467ad467dcb0c move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:11,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fecf495a145b327e29e809685aeb7d2e: 2023-07-12 22:18:11,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fecf495a145b327e29e809685aeb7d2e move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:11,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:11,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b0d715e018ed345e9f110519904a740c, disabling compactions & flushes 2023-07-12 22:18:11,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. after waiting 0 ms 2023-07-12 22:18:11,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 
2023-07-12 22:18:11,743 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=CLOSED 2023-07-12 22:18:11,743 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291743"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200291743"}]},"ts":"1689200291743"} 2023-07-12 22:18:11,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:11,745 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=CLOSED 2023-07-12 22:18:11,745 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291745"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200291745"}]},"ts":"1689200291745"} 2023-07-12 22:18:11,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=39 2023-07-12 22:18:11,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=39, state=SUCCESS; CloseRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 186 msec 2023-07-12 22:18:11,751 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:11,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:11,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 
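Editorial note: each region close above ends with WALSplitUtil writing a <maxSeqId>.seqid marker under the region's recovered.edits directory (e.g. .../b0d715e018ed345e9f110519904a740c/recovered.edits/4.seqid), which the next open reads to pick up the sequence id. A small sketch of listing such a marker with the Hadoop FileSystem API; the path is copied from the log entries above, everything else is illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRecoveredEditsMarkers {
  public static void main(String[] args) throws Exception {
    // Region directory copied from the WALSplitUtil entries above; on a real
    // cluster this would live under the table's directory in the HBase root dir.
    Path recoveredEdits = new Path(
        "hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105"
            + "/data/default/Group_testTableMoveTruncateAndDrop"
            + "/b0d715e018ed345e9f110519904a740c/recovered.edits");
    FileSystem fs = recoveredEdits.getFileSystem(new Configuration());
    for (FileStatus status : fs.listStatus(recoveredEdits)) {
      // After the close above, the directory holds a single marker named 4.seqid.
      System.out.println(status.getPath().getName() + " len=" + status.getLen());
    }
  }
}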
2023-07-12 22:18:11,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b0d715e018ed345e9f110519904a740c: 2023-07-12 22:18:11,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b0d715e018ed345e9f110519904a740c move to jenkins-hbase4.apache.org,41059,1689200282965 record at close sequenceid=2 2023-07-12 22:18:11,761 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=36 2023-07-12 22:18:11,761 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=36, state=SUCCESS; CloseRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,44439,1689200283155 in 200 msec 2023-07-12 22:18:11,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:11,762 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:11,763 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=CLOSED 2023-07-12 22:18:11,763 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291763"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200291763"}]},"ts":"1689200291763"} 2023-07-12 22:18:11,768 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=34 2023-07-12 22:18:11,768 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=34, state=SUCCESS; CloseRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 229 msec 2023-07-12 22:18:11,769 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:11,881 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 22:18:11,882 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:11,882 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291881"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291881"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291881"}]},"ts":"1689200291881"} 2023-07-12 22:18:11,882 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:11,882 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:11,882 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291882"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291882"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291882"}]},"ts":"1689200291882"} 2023-07-12 22:18:11,882 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:11,883 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291882"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291882"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291882"}]},"ts":"1689200291882"} 2023-07-12 22:18:11,882 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200291882"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291882"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291882"}]},"ts":"1689200291882"} 2023-07-12 22:18:11,882 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:11,883 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200291882"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200291882"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200291882"}]},"ts":"1689200291882"} 2023-07-12 22:18:11,886 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=34, state=RUNNABLE; OpenRegionProcedure 
b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:11,889 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=39, state=RUNNABLE; OpenRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:11,895 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=36, state=RUNNABLE; OpenRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:11,897 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=33, state=RUNNABLE; OpenRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:11,898 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=32, state=RUNNABLE; OpenRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:12,046 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b0d715e018ed345e9f110519904a740c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 
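Editorial note: the RegionStateStore Put entries above show what the master writes to hbase:meta for each transition: info:regioninfo, info:sn and info:state while a region is CLOSING/OPENING, plus info:server, info:serverstartcode and info:seqnumDuringOpen once it is OPEN. A sketch of reading two of those columns back with the standard client API; the row key and qualifiers are taken from the Put JSON above, the rest is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadMetaRegionState {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Row key is the full region name exactly as it appears in the Put entries above.
      Get get = new Get(Bytes.toBytes(
          "Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c."));
      get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"));
      get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("sn"));
      Result result = meta.get(get);
      System.out.println("state = "
          + Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
      System.out.println("sn    = "
          + Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"))));
    }
  }
}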
2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f03805502a7075bca1f3a0dd39fad28c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:12,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,050 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,055 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,055 DEBUG [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/f 2023-07-12 22:18:12,055 DEBUG [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/f 2023-07-12 22:18:12,056 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f03805502a7075bca1f3a0dd39fad28c columnFamilyName f 2023-07-12 22:18:12,056 INFO [StoreOpener-f03805502a7075bca1f3a0dd39fad28c-1] regionserver.HStore(310): Store=f03805502a7075bca1f3a0dd39fad28c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:12,057 DEBUG [StoreOpener-b0d715e018ed345e9f110519904a740c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/f 2023-07-12 22:18:12,057 DEBUG [StoreOpener-b0d715e018ed345e9f110519904a740c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/f 2023-07-12 22:18:12,057 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b0d715e018ed345e9f110519904a740c columnFamilyName f 2023-07-12 22:18:12,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,058 INFO [StoreOpener-b0d715e018ed345e9f110519904a740c-1] regionserver.HStore(310): Store=b0d715e018ed345e9f110519904a740c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:12,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f03805502a7075bca1f3a0dd39fad28c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9935079360, jitterRate=-0.07472363114356995}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:12,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f03805502a7075bca1f3a0dd39fad28c: 2023-07-12 22:18:12,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c., pid=45, masterSystemTime=1689200292041 2023-07-12 22:18:12,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b0d715e018ed345e9f110519904a740c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11946177280, jitterRate=0.11257445812225342}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:12,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b0d715e018ed345e9f110519904a740c: 2023-07-12 22:18:12,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:12,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:12,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
2023-07-12 22:18:12,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f7d61dcd144d9e188a467ad467dcb0c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 22:18:12,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:12,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,072 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,073 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292072"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200292072"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200292072"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200292072"}]},"ts":"1689200292072"} 2023-07-12 22:18:12,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c., pid=42, masterSystemTime=1689200292040 2023-07-12 22:18:12,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 
2023-07-12 22:18:12,078 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:12,078 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292078"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200292078"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200292078"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200292078"}]},"ts":"1689200292078"} 2023-07-12 22:18:12,079 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,081 DEBUG [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/f 2023-07-12 22:18:12,081 DEBUG [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/f 2023-07-12 22:18:12,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=33 2023-07-12 22:18:12,081 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=33, state=SUCCESS; OpenRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,37441,1689200282765 in 179 msec 2023-07-12 22:18:12,082 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f7d61dcd144d9e188a467ad467dcb0c columnFamilyName f 2023-07-12 22:18:12,083 INFO [StoreOpener-8f7d61dcd144d9e188a467ad467dcb0c-1] regionserver.HStore(310): Store=8f7d61dcd144d9e188a467ad467dcb0c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:12,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,086 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,086 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, REOPEN/MOVE in 566 msec 2023-07-12 22:18:12,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=34 2023-07-12 22:18:12,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=34, state=SUCCESS; OpenRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41059,1689200282965 in 195 msec 2023-07-12 22:18:12,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,098 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, REOPEN/MOVE in 568 msec 2023-07-12 22:18:12,098 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f7d61dcd144d9e188a467ad467dcb0c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10082654400, jitterRate=-0.060979634523391724}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:12,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f7d61dcd144d9e188a467ad467dcb0c: 2023-07-12 22:18:12,100 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c., pid=43, masterSystemTime=1689200292041 2023-07-12 22:18:12,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:12,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:12,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 
2023-07-12 22:18:12,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fecf495a145b327e29e809685aeb7d2e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 22:18:12,102 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,103 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292102"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200292102"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200292102"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200292102"}]},"ts":"1689200292102"} 2023-07-12 22:18:12,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:12,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=39 2023-07-12 22:18:12,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=39, state=SUCCESS; OpenRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,37441,1689200282765 in 216 msec 2023-07-12 22:18:12,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, REOPEN/MOVE in 559 msec 2023-07-12 22:18:12,111 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,112 DEBUG [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/f 2023-07-12 22:18:12,112 DEBUG [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/f 2023-07-12 22:18:12,112 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fecf495a145b327e29e809685aeb7d2e columnFamilyName f 2023-07-12 22:18:12,113 INFO [StoreOpener-fecf495a145b327e29e809685aeb7d2e-1] regionserver.HStore(310): Store=fecf495a145b327e29e809685aeb7d2e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:12,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fecf495a145b327e29e809685aeb7d2e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10033060800, jitterRate=-0.06559839844703674}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:12,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fecf495a145b327e29e809685aeb7d2e: 2023-07-12 22:18:12,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e., pid=44, masterSystemTime=1689200292041 2023-07-12 22:18:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:12,122 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:12,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 
2023-07-12 22:18:12,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb8ce840cf7fa1f0220af2d8aa3dd240, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 22:18:12,123 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:12,123 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292123"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200292123"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200292123"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200292123"}]},"ts":"1689200292123"} 2023-07-12 22:18:12,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,125 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,127 DEBUG [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/f 2023-07-12 22:18:12,127 DEBUG [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/f 2023-07-12 22:18:12,127 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for 
minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb8ce840cf7fa1f0220af2d8aa3dd240 columnFamilyName f 2023-07-12 22:18:12,128 INFO [StoreOpener-fb8ce840cf7fa1f0220af2d8aa3dd240-1] regionserver.HStore(310): Store=fb8ce840cf7fa1f0220af2d8aa3dd240/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:12,129 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=36 2023-07-12 22:18:12,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,129 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=36, state=SUCCESS; OpenRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,37441,1689200282765 in 231 msec 2023-07-12 22:18:12,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb8ce840cf7fa1f0220af2d8aa3dd240; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10107463360, jitterRate=-0.05866912007331848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:12,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb8ce840cf7fa1f0220af2d8aa3dd240: 2023-07-12 22:18:12,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240., pid=46, masterSystemTime=1689200292041 2023-07-12 22:18:12,137 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, REOPEN/MOVE in 603 msec 2023-07-12 22:18:12,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:12,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 
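Editorial note: all five REOPEN/MOVE procedures have now finished. The entries that follow show the MoveTables RPC returning ("All regions from table(s) ... moved to target group"), the client fetching the rsgroup info of the table, and then the start of "disable of Group_testTableMoveTruncateAndDrop" (pid=47 DisableTableProcedure with one UNASSIGN subprocedure per region), the beginning of the truncate-and-drop phase the test is named after. A hedged sketch of that client-side tail, assuming the branch-2.4 RSGroupAdminClient plus the standard Admin API; the check and the truncate/drop calls are illustrative, not the test's actual assertions:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveThenDisableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Corresponds to the GetRSGroupInfoOfTable request in the log: confirm the
      // table is now owned by the target group.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table belongs to group: " + info.getName());

      // Corresponds to the DisableTableProcedure (pid=47) and its UNASSIGN
      // subprocedures; truncate and drop follow in the remainder of the test.
      admin.disableTable(table);
      admin.truncateTable(table, true);   // preserveSplits=true, illustrative choice
      // truncateTable leaves the recreated table enabled, so disable again before dropping it.
      admin.disableTable(table);
      admin.deleteTable(table);
    }
  }
}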
2023-07-12 22:18:12,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,140 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292139"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200292139"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200292139"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200292139"}]},"ts":"1689200292139"} 2023-07-12 22:18:12,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=32 2023-07-12 22:18:12,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=32, state=SUCCESS; OpenRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,37441,1689200282765 in 243 msec 2023-07-12 22:18:12,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, REOPEN/MOVE in 631 msec 2023-07-12 22:18:12,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-12 22:18:12,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_48110529. 2023-07-12 22:18:12,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:12,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:12,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:12,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:12,561 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:12,567 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=47, 
state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,584 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200292584"}]},"ts":"1689200292584"} 2023-07-12 22:18:12,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-12 22:18:12,586 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 22:18:12,589 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 22:18:12,594 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, UNASSIGN}] 2023-07-12 22:18:12,597 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, UNASSIGN 2023-07-12 22:18:12,597 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, UNASSIGN 2023-07-12 22:18:12,597 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, UNASSIGN 2023-07-12 22:18:12,597 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, UNASSIGN 2023-07-12 22:18:12,598 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, UNASSIGN 2023-07-12 22:18:12,599 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,599 INFO [PEWorker-1] assignment.RegionStateStore(219): 
pid=51 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,599 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,599 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200292599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200292599"}]},"ts":"1689200292599"} 2023-07-12 22:18:12,599 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:12,599 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200292599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200292599"}]},"ts":"1689200292599"} 2023-07-12 22:18:12,599 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200292599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200292599"}]},"ts":"1689200292599"} 2023-07-12 22:18:12,599 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200292599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200292599"}]},"ts":"1689200292599"} 2023-07-12 22:18:12,599 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:12,600 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200292599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200292599"}]},"ts":"1689200292599"} 2023-07-12 22:18:12,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=51, state=RUNNABLE; CloseRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:12,605 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=50, state=RUNNABLE; CloseRegionProcedure b0d715e018ed345e9f110519904a740c, 
server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:12,607 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=49, state=RUNNABLE; CloseRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:12,608 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=48, state=RUNNABLE; CloseRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:12,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; CloseRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:12,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-12 22:18:12,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b0d715e018ed345e9f110519904a740c, disabling compactions & flushes 2023-07-12 22:18:12,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb8ce840cf7fa1f0220af2d8aa3dd240, disabling compactions & flushes 2023-07-12 22:18:12,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. after waiting 0 ms 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. after waiting 0 ms 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 
2023-07-12 22:18:12,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:12,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:12,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240. 2023-07-12 22:18:12,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb8ce840cf7fa1f0220af2d8aa3dd240: 2023-07-12 22:18:12,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c. 2023-07-12 22:18:12,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b0d715e018ed345e9f110519904a740c: 2023-07-12 22:18:12,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f7d61dcd144d9e188a467ad467dcb0c, disabling compactions & flushes 2023-07-12 22:18:12,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:12,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:12,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. after waiting 1 ms 2023-07-12 22:18:12,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 
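The Close/Unassign activity above is the server-side execution of the DisableTableProcedure stored as pid=47; the repeated "Checking to see if procedure is done pid=47" entries are the client waiting on that procedure. A minimal sketch of the corresponding Admin call, assuming an open Connection; the 60-second timeout is an arbitrary illustrative choice, not taken from the test:

  import java.util.concurrent.TimeUnit;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public final class DisableTableSketch {
    // Disables a table and blocks until the master-side DisableTableProcedure
    // has unassigned every region and marked the table DISABLED in hbase:meta.
    static void disable(Connection conn, TableName table) throws Exception {
      try (Admin admin = conn.getAdmin()) {
        admin.disableTableAsync(table).get(60, TimeUnit.SECONDS);
        if (!admin.isTableDisabled(table)) {
          throw new IllegalStateException(table + " is not disabled");
        }
      }
    }
  }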
2023-07-12 22:18:12,792 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=fb8ce840cf7fa1f0220af2d8aa3dd240, regionState=CLOSED 2023-07-12 22:18:12,792 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292792"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200292792"}]},"ts":"1689200292792"} 2023-07-12 22:18:12,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,793 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=b0d715e018ed345e9f110519904a740c, regionState=CLOSED 2023-07-12 22:18:12,793 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292793"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200292793"}]},"ts":"1689200292793"} 2023-07-12 22:18:12,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:12,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c. 2023-07-12 22:18:12,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f7d61dcd144d9e188a467ad467dcb0c: 2023-07-12 22:18:12,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,808 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=8f7d61dcd144d9e188a467ad467dcb0c, regionState=CLOSED 2023-07-12 22:18:12,808 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200292808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200292808"}]},"ts":"1689200292808"} 2023-07-12 22:18:12,810 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=48 2023-07-12 22:18:12,810 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=48, state=SUCCESS; CloseRegionProcedure fb8ce840cf7fa1f0220af2d8aa3dd240, server=jenkins-hbase4.apache.org,37441,1689200282765 in 186 msec 2023-07-12 22:18:12,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fecf495a145b327e29e809685aeb7d2e, disabling compactions & flushes 2023-07-12 22:18:12,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:12,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:12,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. after waiting 0 ms 2023-07-12 22:18:12,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 2023-07-12 22:18:12,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=50 2023-07-12 22:18:12,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=50, state=SUCCESS; CloseRegionProcedure b0d715e018ed345e9f110519904a740c, server=jenkins-hbase4.apache.org,41059,1689200282965 in 200 msec 2023-07-12 22:18:12,815 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fb8ce840cf7fa1f0220af2d8aa3dd240, UNASSIGN in 219 msec 2023-07-12 22:18:12,816 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0d715e018ed345e9f110519904a740c, UNASSIGN in 222 msec 2023-07-12 22:18:12,816 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-12 22:18:12,816 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; CloseRegionProcedure 8f7d61dcd144d9e188a467ad467dcb0c, server=jenkins-hbase4.apache.org,37441,1689200282765 in 202 msec 2023-07-12 22:18:12,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f7d61dcd144d9e188a467ad467dcb0c, UNASSIGN in 222 msec 2023-07-12 22:18:12,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:12,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e. 
2023-07-12 22:18:12,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fecf495a145b327e29e809685aeb7d2e: 2023-07-12 22:18:12,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f03805502a7075bca1f3a0dd39fad28c, disabling compactions & flushes 2023-07-12 22:18:12,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:12,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:12,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. after waiting 0 ms 2023-07-12 22:18:12,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 2023-07-12 22:18:12,826 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=fecf495a145b327e29e809685aeb7d2e, regionState=CLOSED 2023-07-12 22:18:12,827 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292826"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200292826"}]},"ts":"1689200292826"} 2023-07-12 22:18:12,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:12,831 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=51 2023-07-12 22:18:12,831 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=51, state=SUCCESS; CloseRegionProcedure fecf495a145b327e29e809685aeb7d2e, server=jenkins-hbase4.apache.org,37441,1689200282765 in 225 msec 2023-07-12 22:18:12,833 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c. 
2023-07-12 22:18:12,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f03805502a7075bca1f3a0dd39fad28c: 2023-07-12 22:18:12,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fecf495a145b327e29e809685aeb7d2e, UNASSIGN in 240 msec 2023-07-12 22:18:12,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,836 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=f03805502a7075bca1f3a0dd39fad28c, regionState=CLOSED 2023-07-12 22:18:12,836 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200292836"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200292836"}]},"ts":"1689200292836"} 2023-07-12 22:18:12,841 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=49 2023-07-12 22:18:12,841 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=49, state=SUCCESS; CloseRegionProcedure f03805502a7075bca1f3a0dd39fad28c, server=jenkins-hbase4.apache.org,37441,1689200282765 in 231 msec 2023-07-12 22:18:12,843 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=47 2023-07-12 22:18:12,843 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f03805502a7075bca1f3a0dd39fad28c, UNASSIGN in 250 msec 2023-07-12 22:18:12,844 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200292844"}]},"ts":"1689200292844"} 2023-07-12 22:18:12,846 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 22:18:12,848 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 22:18:12,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 276 msec 2023-07-12 22:18:12,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-12 22:18:12,890 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-12 22:18:12,891 INFO [Listener at localhost/40739] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:12,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop 
preserveSplits=true) 2023-07-12 22:18:12,909 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 22:18:12,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-12 22:18:12,926 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,926 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,926 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,926 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,926 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,931 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits] 2023-07-12 22:18:12,931 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits] 2023-07-12 22:18:12,932 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits] 2023-07-12 22:18:12,932 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits] 2023-07-12 22:18:12,933 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits] 2023-07-12 22:18:12,946 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e/recovered.edits/7.seqid 2023-07-12 22:18:12,946 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c/recovered.edits/7.seqid 2023-07-12 22:18:12,947 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c/recovered.edits/7.seqid 2023-07-12 22:18:12,947 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c/recovered.edits/7.seqid 2023-07-12 22:18:12,948 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fecf495a145b327e29e809685aeb7d2e 2023-07-12 22:18:12,948 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0d715e018ed345e9f110519904a740c 2023-07-12 22:18:12,949 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f03805502a7075bca1f3a0dd39fad28c 2023-07-12 22:18:12,949 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f7d61dcd144d9e188a467ad467dcb0c 2023-07-12 22:18:12,950 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240/recovered.edits/7.seqid 2023-07-12 22:18:12,955 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fb8ce840cf7fa1f0220af2d8aa3dd240 2023-07-12 22:18:12,955 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 22:18:12,993 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 22:18:13,004 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 22:18:13,005 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-12 22:18:13,006 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200293006"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,006 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200293006"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,006 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200293006"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,006 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200293006"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,006 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200293006"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,010 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 22:18:13,010 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => fb8ce840cf7fa1f0220af2d8aa3dd240, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200290194.fb8ce840cf7fa1f0220af2d8aa3dd240.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f03805502a7075bca1f3a0dd39fad28c, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200290194.f03805502a7075bca1f3a0dd39fad28c.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b0d715e018ed345e9f110519904a740c, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200290194.b0d715e018ed345e9f110519904a740c.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 
'r\x1C\xC7r\x1B'}, {ENCODED => fecf495a145b327e29e809685aeb7d2e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200290194.fecf495a145b327e29e809685aeb7d2e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8f7d61dcd144d9e188a467ad467dcb0c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200290194.8f7d61dcd144d9e188a467ad467dcb0c.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 22:18:13,010 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-12 22:18:13,010 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200293010"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:13,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-12 22:18:13,013 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 22:18:13,025 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,025 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,025 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,025 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,026 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 empty. 2023-07-12 22:18:13,025 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,027 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 empty. 2023-07-12 22:18:13,027 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe empty. 2023-07-12 22:18:13,027 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 empty. 
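Everything from the "Stored pid=58 ... TruncateTableProcedure" entry onward is the master executing a truncate with preserveSplits=true: the old region directories are archived, their rows are deleted from hbase:meta, and fresh empty regions with the same boundaries are created below. The client side is a single Admin call; a minimal sketch, assuming an open Connection:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public final class TruncateTableSketch {
    // Truncates a disabled table while keeping its split boundaries.
    static void truncate(Connection conn, TableName table) throws java.io.IOException {
      try (Admin admin = conn.getAdmin()) {
        // preserveSplits=true: the new regions reuse the old start/end keys
        // instead of collapsing the table back to a single region.
        admin.truncateTable(table, true);
      }
    }
  }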
2023-07-12 22:18:13,028 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 empty. 2023-07-12 22:18:13,028 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,028 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,028 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,028 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,029 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,029 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 22:18:13,075 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:13,086 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 78285a4919386f8bfeb4ee3b1759ef77, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:13,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d623ba9fade16a6d6d97c3e6e3958d79, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:13,103 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => ebab7213bb2188e10d3205e6f12bb566, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:13,166 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,166 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 78285a4919386f8bfeb4ee3b1759ef77, disabling compactions & flushes 2023-07-12 22:18:13,166 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,166 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,166 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. after waiting 0 ms 2023-07-12 22:18:13,166 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,167 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 
2023-07-12 22:18:13,167 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 78285a4919386f8bfeb4ee3b1759ef77: 2023-07-12 22:18:13,167 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 506e4fc2457283b47ee998329cf531e8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d623ba9fade16a6d6d97c3e6e3958d79, disabling compactions & flushes 2023-07-12 22:18:13,176 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. after waiting 0 ms 2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:13,176 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 
2023-07-12 22:18:13,176 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d623ba9fade16a6d6d97c3e6e3958d79: 2023-07-12 22:18:13,177 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => bd8c5c3c37581d0b133274bac2d0ddbe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:13,203 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,204 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 506e4fc2457283b47ee998329cf531e8, disabling compactions & flushes 2023-07-12 22:18:13,204 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:13,204 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:13,204 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. after waiting 0 ms 2023-07-12 22:18:13,204 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:13,204 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 
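The five regions being recreated here keep the original boundaries ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '') and the single family 'f' from the old table descriptor. For reference, a pre-split table of that shape would be created roughly as sketched below; deriving the intermediate keys with Bytes.split is an assumption made for illustration, not something the log confirms about this test:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.util.Bytes;

  public final class PreSplitTableSketch {
    // Creates a table with family 'f' and four split keys, i.e. five regions.
    static void create(Connection conn, TableName table) throws java.io.IOException {
      // The two endpoints "aaaaa" and "zzzzz" plus two evenly spaced keys in
      // between: four split keys, which yield five regions once the empty
      // start and end keys are added by the master.
      byte[][] splitKeys = Bytes.split(Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 2);
      try (Admin admin = conn.getAdmin()) {
        admin.createTable(
            TableDescriptorBuilder.newBuilder(table)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .build(),
            splitKeys);
      }
    }
  }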
2023-07-12 22:18:13,204 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 506e4fc2457283b47ee998329cf531e8: 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing ebab7213bb2188e10d3205e6f12bb566, disabling compactions & flushes 2023-07-12 22:18:13,208 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. after waiting 0 ms 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,208 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for ebab7213bb2188e10d3205e6f12bb566: 2023-07-12 22:18:13,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-12 22:18:13,217 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing bd8c5c3c37581d0b133274bac2d0ddbe, disabling compactions & flushes 2023-07-12 22:18:13,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 
after waiting 0 ms 2023-07-12 22:18:13,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for bd8c5c3c37581d0b133274bac2d0ddbe: 2023-07-12 22:18:13,224 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200293224"}]},"ts":"1689200293224"} 2023-07-12 22:18:13,224 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200293224"}]},"ts":"1689200293224"} 2023-07-12 22:18:13,224 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200293224"}]},"ts":"1689200293224"} 2023-07-12 22:18:13,224 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200293224"}]},"ts":"1689200293224"} 2023-07-12 22:18:13,225 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200293224"}]},"ts":"1689200293224"} 2023-07-12 22:18:13,228 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
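With the five new rows added to hbase:meta, the table is about to be moved to ENABLING and its regions reassigned. A quick way for a client to confirm that truncation preserved the layout is to read the region list back through Admin; a minimal sketch, assuming an open Connection (the expected count of 5 is specific to this table's split layout):

  import java.util.List;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.RegionInfo;

  public final class VerifySplitsSketch {
    // Reads the region list from the master and checks the region count.
    static List<RegionInfo> checkRegionCount(Connection conn, TableName table, int expected)
        throws java.io.IOException {
      try (Admin admin = conn.getAdmin()) {
        List<RegionInfo> regions = admin.getRegions(table);
        if (regions.size() != expected) {
          throw new IllegalStateException(
              "expected " + expected + " regions, got " + regions.size());
        }
        return regions;
      }
    }
  }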
2023-07-12 22:18:13,230 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200293230"}]},"ts":"1689200293230"} 2023-07-12 22:18:13,232 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 22:18:13,237 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:13,237 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:13,237 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:13,237 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:13,238 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=506e4fc2457283b47ee998329cf531e8, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, ASSIGN}] 2023-07-12 22:18:13,240 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, ASSIGN 2023-07-12 22:18:13,241 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:13,242 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, ASSIGN 2023-07-12 22:18:13,243 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, ASSIGN 2023-07-12 22:18:13,243 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=506e4fc2457283b47ee998329cf531e8, ASSIGN 2023-07-12 22:18:13,243 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, ASSIGN 2023-07-12 22:18:13,246 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:13,246 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:13,247 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=506e4fc2457283b47ee998329cf531e8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:13,248 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:13,392 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
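
With the regions back in meta, the procedure fans out five ASSIGN subprocedures (pids 59-63); the balancer computes round-robin plans and each TransitRegionStateProcedure starts from state=OFFLINE with a target server. A hedged sketch of how a client can observe where the regions land once assignment completes, using the public RegionLocator API ('connection' is an assumed open client Connection):

    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Hedged sketch: print which region server hosts each of the five regions.
    final class DumpRegionLocations {
      static void dump(Connection connection) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = connection.getRegionLocator(table)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation location : locations) {
            System.out.println(location.getRegion().getEncodedName()
                + " -> " + location.getServerName());
          }
        }
      }
    }
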
2023-07-12 22:18:13,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=78285a4919386f8bfeb4ee3b1759ef77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,395 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=d623ba9fade16a6d6d97c3e6e3958d79, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:13,395 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=ebab7213bb2188e10d3205e6f12bb566, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:13,395 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=bd8c5c3c37581d0b133274bac2d0ddbe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,395 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200293395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200293395"}]},"ts":"1689200293395"} 2023-07-12 22:18:13,395 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=506e4fc2457283b47ee998329cf531e8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,395 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200293395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200293395"}]},"ts":"1689200293395"} 2023-07-12 22:18:13,395 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200293395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200293395"}]},"ts":"1689200293395"} 2023-07-12 22:18:13,395 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200293395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200293395"}]},"ts":"1689200293395"} 2023-07-12 22:18:13,395 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200293395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200293395"}]},"ts":"1689200293395"} 2023-07-12 22:18:13,398 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=60, state=RUNNABLE; OpenRegionProcedure 
ebab7213bb2188e10d3205e6f12bb566, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:13,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=62, state=RUNNABLE; OpenRegionProcedure 506e4fc2457283b47ee998329cf531e8, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:13,402 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; OpenRegionProcedure 78285a4919386f8bfeb4ee3b1759ef77, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:13,404 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=63, state=RUNNABLE; OpenRegionProcedure bd8c5c3c37581d0b133274bac2d0ddbe, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:13,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=61, state=RUNNABLE; OpenRegionProcedure d623ba9fade16a6d6d97c3e6e3958d79, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:13,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-12 22:18:13,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 78285a4919386f8bfeb4ee3b1759ef77, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 22:18:13,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 
2023-07-12 22:18:13,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d623ba9fade16a6d6d97c3e6e3958d79, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 22:18:13,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,565 INFO [StoreOpener-78285a4919386f8bfeb4ee3b1759ef77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,566 INFO [StoreOpener-d623ba9fade16a6d6d97c3e6e3958d79-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,567 DEBUG [StoreOpener-78285a4919386f8bfeb4ee3b1759ef77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/f 2023-07-12 22:18:13,568 DEBUG [StoreOpener-78285a4919386f8bfeb4ee3b1759ef77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/f 2023-07-12 22:18:13,568 INFO [StoreOpener-78285a4919386f8bfeb4ee3b1759ef77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 78285a4919386f8bfeb4ee3b1759ef77 columnFamilyName f 2023-07-12 22:18:13,569 INFO [StoreOpener-78285a4919386f8bfeb4ee3b1759ef77-1] regionserver.HStore(310): Store=78285a4919386f8bfeb4ee3b1759ef77/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:13,569 DEBUG [StoreOpener-d623ba9fade16a6d6d97c3e6e3958d79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/f 2023-07-12 22:18:13,569 DEBUG [StoreOpener-d623ba9fade16a6d6d97c3e6e3958d79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/f 2023-07-12 22:18:13,570 INFO [StoreOpener-d623ba9fade16a6d6d97c3e6e3958d79-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d623ba9fade16a6d6d97c3e6e3958d79 columnFamilyName f 2023-07-12 22:18:13,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,574 INFO [StoreOpener-d623ba9fade16a6d6d97c3e6e3958d79-1] regionserver.HStore(310): Store=d623ba9fade16a6d6d97c3e6e3958d79/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:13,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:13,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:13,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:13,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:13,602 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d623ba9fade16a6d6d97c3e6e3958d79; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10920977920, jitterRate=0.017095327377319336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:13,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d623ba9fade16a6d6d97c3e6e3958d79: 2023-07-12 22:18:13,609 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 78285a4919386f8bfeb4ee3b1759ef77; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10085481920, jitterRate=-0.06071630120277405}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:13,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 78285a4919386f8bfeb4ee3b1759ef77: 2023-07-12 22:18:13,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79., pid=68, masterSystemTime=1689200293558 2023-07-12 22:18:13,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77., pid=66, masterSystemTime=1689200293558 2023-07-12 22:18:13,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:13,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:13,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 
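
On the region servers, each OpenRegionProcedure instantiates the region, opens the single column family 'f' (the cache and compaction settings logged by the StoreOpener threads), finds 0 recovered edits, writes a recovered.edits/1.seqid marker, and reports the region open at next sequenceid=2. From the client side the same milestone can be awaited with Admin#isTableAvailable; a minimal sketch, assuming an open Admin handle:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Hedged sketch: block until every region of the table is open again, the condition
    // the "Opened ..." handler lines above report region by region.
    final class WaitForTableOnline {
      static void await(Admin admin, TableName table) throws Exception {
        while (!admin.isTableAvailable(table)) {  // true only once all regions are assigned and open
          TimeUnit.MILLISECONDS.sleep(100);
        }
      }
    }
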
2023-07-12 22:18:13,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ebab7213bb2188e10d3205e6f12bb566, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 22:18:13,619 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=d623ba9fade16a6d6d97c3e6e3958d79, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:13,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,619 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293619"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200293619"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200293619"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200293619"}]},"ts":"1689200293619"} 2023-07-12 22:18:13,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:13,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 
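
The regions being opened all carry one family 'f' and boundaries derived from four split keys ('aaaaa' through 'zzzzz', with interpolated middle keys such as i\xBF\x14i\xBE and r\x1C\xC7r\x1B). A sketch of how such a pre-split table could be declared is shown below; the ASCII middle keys are illustrative stand-ins, not the exact byte-interpolated keys from the log.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hedged sketch: one column family 'f', four split keys, five regions.
    final class CreatePreSplitTable {
      static void create(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        byte[][] splitKeys = {
            Bytes.toBytes("aaaaa"), Bytes.toBytes("jjjjj"),
            Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz") };
        admin.createTable(desc, splitKeys);  // runs a CreateTableProcedure on the master
      }
    }
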
2023-07-12 22:18:13,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 506e4fc2457283b47ee998329cf531e8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 22:18:13,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,621 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=78285a4919386f8bfeb4ee3b1759ef77, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,622 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293621"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200293621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200293621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200293621"}]},"ts":"1689200293621"} 2023-07-12 22:18:13,634 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=61 2023-07-12 22:18:13,635 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=61, state=SUCCESS; OpenRegionProcedure d623ba9fade16a6d6d97c3e6e3958d79, server=jenkins-hbase4.apache.org,37441,1689200282765 in 217 msec 2023-07-12 22:18:13,635 INFO [StoreOpener-ebab7213bb2188e10d3205e6f12bb566-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-12 22:18:13,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; OpenRegionProcedure 78285a4919386f8bfeb4ee3b1759ef77, server=jenkins-hbase4.apache.org,41059,1689200282965 in 222 msec 2023-07-12 22:18:13,637 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, ASSIGN in 397 msec 2023-07-12 22:18:13,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=78285a4919386f8bfeb4ee3b1759ef77, ASSIGN in 399 msec 2023-07-12 22:18:13,646 INFO [StoreOpener-506e4fc2457283b47ee998329cf531e8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,648 DEBUG [StoreOpener-ebab7213bb2188e10d3205e6f12bb566-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/f 2023-07-12 22:18:13,648 DEBUG [StoreOpener-ebab7213bb2188e10d3205e6f12bb566-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/f 2023-07-12 22:18:13,648 INFO [StoreOpener-ebab7213bb2188e10d3205e6f12bb566-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ebab7213bb2188e10d3205e6f12bb566 columnFamilyName f 2023-07-12 22:18:13,649 DEBUG [StoreOpener-506e4fc2457283b47ee998329cf531e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/f 2023-07-12 22:18:13,649 DEBUG [StoreOpener-506e4fc2457283b47ee998329cf531e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/f 2023-07-12 22:18:13,650 INFO [StoreOpener-ebab7213bb2188e10d3205e6f12bb566-1] regionserver.HStore(310): Store=ebab7213bb2188e10d3205e6f12bb566/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:13,651 INFO [StoreOpener-506e4fc2457283b47ee998329cf531e8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 506e4fc2457283b47ee998329cf531e8 columnFamilyName f 2023-07-12 22:18:13,651 INFO 
[StoreOpener-506e4fc2457283b47ee998329cf531e8-1] regionserver.HStore(310): Store=506e4fc2457283b47ee998329cf531e8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:13,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:13,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:13,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:13,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ebab7213bb2188e10d3205e6f12bb566; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10783334880, jitterRate=0.0042763203382492065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:13,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:13,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ebab7213bb2188e10d3205e6f12bb566: 2023-07-12 22:18:13,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 506e4fc2457283b47ee998329cf531e8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12055457600, jitterRate=0.12275198101997375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:13,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open 
journal for 506e4fc2457283b47ee998329cf531e8: 2023-07-12 22:18:13,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566., pid=64, masterSystemTime=1689200293558 2023-07-12 22:18:13,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8., pid=65, masterSystemTime=1689200293558 2023-07-12 22:18:13,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:13,669 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=ebab7213bb2188e10d3205e6f12bb566, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:13,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:13,669 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293668"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200293668"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200293668"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200293668"}]},"ts":"1689200293668"} 2023-07-12 22:18:13,669 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:13,669 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 
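
Throughout this stretch the client keeps polling the master ("Checking to see if procedure is done pid=58") until the truncate procedure finishes; HBaseAdmin surfaces that polling as a Future on the table operation. A hedged sketch of the non-blocking form of the same truncate call (timeout value is illustrative):

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Hedged sketch: the async variant is backed by the same master-side procedure;
    // waiting on the future produces the periodic "is procedure done" RPCs seen above.
    final class AsyncTruncate {
      static void truncateAndWait(Admin admin, TableName table) throws Exception {
        Future<Void> future = admin.truncateTableAsync(table, true);  // preserveSplits=true
        future.get(5, TimeUnit.MINUTES);  // blocks until the TruncateTableProcedure completes
      }
    }
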
2023-07-12 22:18:13,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bd8c5c3c37581d0b133274bac2d0ddbe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 22:18:13,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:13,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,670 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=506e4fc2457283b47ee998329cf531e8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,670 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200293670"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200293670"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200293670"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200293670"}]},"ts":"1689200293670"} 2023-07-12 22:18:13,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=60 2023-07-12 22:18:13,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; OpenRegionProcedure ebab7213bb2188e10d3205e6f12bb566, server=jenkins-hbase4.apache.org,37441,1689200282765 in 275 msec 2023-07-12 22:18:13,679 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=62 2023-07-12 22:18:13,679 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=62, state=SUCCESS; OpenRegionProcedure 506e4fc2457283b47ee998329cf531e8, server=jenkins-hbase4.apache.org,41059,1689200282965 in 272 msec 2023-07-12 22:18:13,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, ASSIGN in 439 msec 2023-07-12 22:18:13,682 INFO [StoreOpener-bd8c5c3c37581d0b133274bac2d0ddbe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,683 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=506e4fc2457283b47ee998329cf531e8, ASSIGN in 442 msec 2023-07-12 22:18:13,685 DEBUG [StoreOpener-bd8c5c3c37581d0b133274bac2d0ddbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/f 2023-07-12 22:18:13,685 DEBUG [StoreOpener-bd8c5c3c37581d0b133274bac2d0ddbe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/f 2023-07-12 22:18:13,685 INFO [StoreOpener-bd8c5c3c37581d0b133274bac2d0ddbe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bd8c5c3c37581d0b133274bac2d0ddbe columnFamilyName f 2023-07-12 22:18:13,686 INFO [StoreOpener-bd8c5c3c37581d0b133274bac2d0ddbe-1] regionserver.HStore(310): Store=bd8c5c3c37581d0b133274bac2d0ddbe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:13,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:13,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:13,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bd8c5c3c37581d0b133274bac2d0ddbe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10117382080, jitterRate=-0.05774536728858948}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:13,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bd8c5c3c37581d0b133274bac2d0ddbe: 2023-07-12 22:18:13,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe., pid=67, masterSystemTime=1689200293558 2023-07-12 22:18:13,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,701 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:13,701 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=bd8c5c3c37581d0b133274bac2d0ddbe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:13,702 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200293701"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200293701"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200293701"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200293701"}]},"ts":"1689200293701"} 2023-07-12 22:18:13,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=63 2023-07-12 22:18:13,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; OpenRegionProcedure bd8c5c3c37581d0b133274bac2d0ddbe, server=jenkins-hbase4.apache.org,41059,1689200282965 in 302 msec 2023-07-12 22:18:13,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-12 22:18:13,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, ASSIGN in 469 msec 2023-07-12 22:18:13,711 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200293711"}]},"ts":"1689200293711"} 2023-07-12 22:18:13,715 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 22:18:13,718 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 22:18:13,722 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 819 msec 2023-07-12 22:18:14,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-12 22:18:14,020 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-12 22:18:14,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,021 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,023 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-12 22:18:14,029 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200294028"}]},"ts":"1689200294028"} 2023-07-12 22:18:14,030 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 22:18:14,032 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 22:18:14,034 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=506e4fc2457283b47ee998329cf531e8, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, UNASSIGN}] 2023-07-12 22:18:14,036 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, UNASSIGN 2023-07-12 22:18:14,036 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=bd8c5c3c37581d0b133274bac2d0ddbe, UNASSIGN 2023-07-12 22:18:14,036 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=506e4fc2457283b47ee998329cf531e8, UNASSIGN 2023-07-12 22:18:14,037 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, UNASSIGN 2023-07-12 22:18:14,037 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, UNASSIGN 2023-07-12 22:18:14,040 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=78285a4919386f8bfeb4ee3b1759ef77, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:14,040 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=ebab7213bb2188e10d3205e6f12bb566, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:14,040 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=506e4fc2457283b47ee998329cf531e8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:14,040 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294040"}]},"ts":"1689200294040"} 2023-07-12 22:18:14,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294040"}]},"ts":"1689200294040"} 2023-07-12 22:18:14,040 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d623ba9fade16a6d6d97c3e6e3958d79, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:14,041 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294040"}]},"ts":"1689200294040"} 2023-07-12 22:18:14,041 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200294040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294040"}]},"ts":"1689200294040"} 2023-07-12 22:18:14,040 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=bd8c5c3c37581d0b133274bac2d0ddbe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:14,041 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200294040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294040"}]},"ts":"1689200294040"} 2023-07-12 22:18:14,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=71, state=RUNNABLE; CloseRegionProcedure ebab7213bb2188e10d3205e6f12bb566, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:14,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 506e4fc2457283b47ee998329cf531e8, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:14,045 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=72, state=RUNNABLE; CloseRegionProcedure d623ba9fade16a6d6d97c3e6e3958d79, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:14,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=70, state=RUNNABLE; CloseRegionProcedure 78285a4919386f8bfeb4ee3b1759ef77, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:14,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=74, state=RUNNABLE; CloseRegionProcedure bd8c5c3c37581d0b133274bac2d0ddbe, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:14,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-12 22:18:14,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:14,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:14,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ebab7213bb2188e10d3205e6f12bb566, disabling compactions & flushes 2023-07-12 22:18:14,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bd8c5c3c37581d0b133274bac2d0ddbe, disabling compactions & flushes 2023-07-12 22:18:14,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:14,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 
2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. after waiting 0 ms 2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. after waiting 0 ms 2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:14,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:14,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:14,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:14,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe. 2023-07-12 22:18:14,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566. 2023-07-12 22:18:14,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bd8c5c3c37581d0b133274bac2d0ddbe: 2023-07-12 22:18:14,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ebab7213bb2188e10d3205e6f12bb566: 2023-07-12 22:18:14,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:14,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:14,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d623ba9fade16a6d6d97c3e6e3958d79, disabling compactions & flushes 2023-07-12 22:18:14,208 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 
2023-07-12 22:18:14,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:14,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. after waiting 0 ms 2023-07-12 22:18:14,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:14,208 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=ebab7213bb2188e10d3205e6f12bb566, regionState=CLOSED 2023-07-12 22:18:14,208 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294208"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200294208"}]},"ts":"1689200294208"} 2023-07-12 22:18:14,210 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=bd8c5c3c37581d0b133274bac2d0ddbe, regionState=CLOSED 2023-07-12 22:18:14,210 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200294210"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200294210"}]},"ts":"1689200294210"} 2023-07-12 22:18:14,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:14,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:14,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 506e4fc2457283b47ee998329cf531e8, disabling compactions & flushes 2023-07-12 22:18:14,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:14,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 2023-07-12 22:18:14,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. after waiting 0 ms 2023-07-12 22:18:14,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 
2023-07-12 22:18:14,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=71 2023-07-12 22:18:14,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=71, state=SUCCESS; CloseRegionProcedure ebab7213bb2188e10d3205e6f12bb566, server=jenkins-hbase4.apache.org,37441,1689200282765 in 169 msec 2023-07-12 22:18:14,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=74 2023-07-12 22:18:14,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=74, state=SUCCESS; CloseRegionProcedure bd8c5c3c37581d0b133274bac2d0ddbe, server=jenkins-hbase4.apache.org,41059,1689200282965 in 168 msec 2023-07-12 22:18:14,221 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ebab7213bb2188e10d3205e6f12bb566, UNASSIGN in 184 msec 2023-07-12 22:18:14,222 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bd8c5c3c37581d0b133274bac2d0ddbe, UNASSIGN in 186 msec 2023-07-12 22:18:14,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:14,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:14,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79. 2023-07-12 22:18:14,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d623ba9fade16a6d6d97c3e6e3958d79: 2023-07-12 22:18:14,226 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8. 
2023-07-12 22:18:14,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 506e4fc2457283b47ee998329cf531e8: 2023-07-12 22:18:14,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:14,229 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d623ba9fade16a6d6d97c3e6e3958d79, regionState=CLOSED 2023-07-12 22:18:14,229 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294229"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200294229"}]},"ts":"1689200294229"} 2023-07-12 22:18:14,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:14,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:14,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 78285a4919386f8bfeb4ee3b1759ef77, disabling compactions & flushes 2023-07-12 22:18:14,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:14,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 2023-07-12 22:18:14,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. after waiting 0 ms 2023-07-12 22:18:14,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 
2023-07-12 22:18:14,231 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=506e4fc2457283b47ee998329cf531e8, regionState=CLOSED 2023-07-12 22:18:14,231 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689200294231"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200294231"}]},"ts":"1689200294231"} 2023-07-12 22:18:14,234 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=72 2023-07-12 22:18:14,235 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=72, state=SUCCESS; CloseRegionProcedure d623ba9fade16a6d6d97c3e6e3958d79, server=jenkins-hbase4.apache.org,37441,1689200282765 in 187 msec 2023-07-12 22:18:14,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-12 22:18:14,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 506e4fc2457283b47ee998329cf531e8, server=jenkins-hbase4.apache.org,41059,1689200282965 in 189 msec 2023-07-12 22:18:14,237 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d623ba9fade16a6d6d97c3e6e3958d79, UNASSIGN in 201 msec 2023-07-12 22:18:14,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=506e4fc2457283b47ee998329cf531e8, UNASSIGN in 201 msec 2023-07-12 22:18:14,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:14,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77. 
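For context, the TransitRegionStateProcedure(UNASSIGN)/CloseRegionProcedure chain logged above is the master-side work of a table disable; in the test it is driven through the standard client Admin API. A minimal sketch, assuming an existing Configuration conf and using the table name from the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Disable the table; the master turns this into a DisableTableProcedure whose
    // subprocedures UNASSIGN and close each region, as in the log entries above.
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.disableTable(tn); // blocks until the disable procedure (pid=69 here) completes
    }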
2023-07-12 22:18:14,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 78285a4919386f8bfeb4ee3b1759ef77: 2023-07-12 22:18:14,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:14,245 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=78285a4919386f8bfeb4ee3b1759ef77, regionState=CLOSED 2023-07-12 22:18:14,245 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689200294245"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200294245"}]},"ts":"1689200294245"} 2023-07-12 22:18:14,248 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=70 2023-07-12 22:18:14,248 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=70, state=SUCCESS; CloseRegionProcedure 78285a4919386f8bfeb4ee3b1759ef77, server=jenkins-hbase4.apache.org,41059,1689200282965 in 200 msec 2023-07-12 22:18:14,250 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=69 2023-07-12 22:18:14,250 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78285a4919386f8bfeb4ee3b1759ef77, UNASSIGN in 214 msec 2023-07-12 22:18:14,251 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200294250"}]},"ts":"1689200294250"} 2023-07-12 22:18:14,252 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 22:18:14,255 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 22:18:14,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 231 msec 2023-07-12 22:18:14,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-12 22:18:14,331 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-12 22:18:14,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,347 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_48110529' 2023-07-12 22:18:14,348 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:14,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 22:18:14,366 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:14,366 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:14,366 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:14,366 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:14,366 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:14,370 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/recovered.edits] 2023-07-12 22:18:14,370 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/recovered.edits] 2023-07-12 22:18:14,370 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/recovered.edits] 2023-07-12 22:18:14,371 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/recovered.edits] 2023-07-12 22:18:14,371 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/recovered.edits] 2023-07-12 22:18:14,386 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe/recovered.edits/4.seqid 2023-07-12 22:18:14,386 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8/recovered.edits/4.seqid 2023-07-12 22:18:14,387 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79/recovered.edits/4.seqid 2023-07-12 22:18:14,387 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bd8c5c3c37581d0b133274bac2d0ddbe 2023-07-12 22:18:14,387 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/506e4fc2457283b47ee998329cf531e8 2023-07-12 22:18:14,390 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566/recovered.edits/4.seqid 2023-07-12 22:18:14,394 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d623ba9fade16a6d6d97c3e6e3958d79 2023-07-12 22:18:14,394 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77/recovered.edits/4.seqid 2023-07-12 22:18:14,396 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ebab7213bb2188e10d3205e6f12bb566 2023-07-12 22:18:14,396 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78285a4919386f8bfeb4ee3b1759ef77 2023-07-12 22:18:14,396 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 22:18:14,400 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,415 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 22:18:14,419 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 22:18:14,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200294422"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200294422"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200294422"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200294422"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200294422"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,425 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 22:18:14,425 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 78285a4919386f8bfeb4ee3b1759ef77, NAME => 'Group_testTableMoveTruncateAndDrop,,1689200292958.78285a4919386f8bfeb4ee3b1759ef77.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => ebab7213bb2188e10d3205e6f12bb566, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689200292958.ebab7213bb2188e10d3205e6f12bb566.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => d623ba9fade16a6d6d97c3e6e3958d79, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689200292958.d623ba9fade16a6d6d97c3e6e3958d79.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 506e4fc2457283b47ee998329cf531e8, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689200292958.506e4fc2457283b47ee998329cf531e8.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => bd8c5c3c37581d0b133274bac2d0ddbe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689200292958.bd8c5c3c37581d0b133274bac2d0ddbe.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 22:18:14,425 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
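The HFileArchiver activity and hbase:meta deletes above are the DeleteTableProcedure at work: region directories are moved under the cluster's archive directory, then the region rows and the table state are removed from hbase:meta. The corresponding client call, sketched under the same assumptions as the snippet above (configuration and connection setup assumed):

    // Delete the already-disabled table; the master runs a DeleteTableProcedure that
    // archives the region directories and cleans the table out of hbase:meta.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.deleteTable(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
    }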
2023-07-12 22:18:14,425 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200294425"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:14,428 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 22:18:14,431 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 22:18:14,433 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 94 msec 2023-07-12 22:18:14,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-12 22:18:14,464 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-12 22:18:14,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,468 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41059] ipc.CallRunner(144): callId: 167 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:56094 deadline: 1689200354468, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41907 startCode=1689200287570. As of locationSeqNum=6. 2023-07-12 22:18:14,572 DEBUG [hconnection-0x32267dc-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:14,574 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34050, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:14,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:14,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:14,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:14,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_48110529, current retry=0 2023-07-12 22:18:14,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_48110529 => default 2023-07-12 22:18:14,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_48110529 2023-07-12 22:18:14,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:14,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
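The MoveTables/MoveServers/RemoveRSGroup requests above are the test's teardown restoring the default group layout. A minimal sketch of the equivalent RSGroupAdminClient calls, with the server addresses copied from the log and the Connection conn assumed to exist:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Move any remaining tables and servers back to the default group ...
    rsGroupAdmin.moveTables(Collections.emptySet(), "default"); // empty set is logged as "Ignoring"
    rsGroupAdmin.moveServers(new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 41059),
        Address.fromParts("jenkins-hbase4.apache.org", 37441))), "default");
    // ... then drop the now-empty test group.
    rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_48110529");

The follow-on attempt to move the master's address (jenkins-hbase4.apache.org:34283) into the 'master' group fails with the ConstraintException shown in the stack trace below, because that address is not a registered region server; the test tolerates this ("Got this on setup, FYI").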
2023-07-12 22:18:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:14,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:14,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:14,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,624 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:14,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:14,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:14,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:14,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201494637, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:14,638 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:14,640 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:14,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,641 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:14,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:14,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,668 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=509 (was 425) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322032218_17 at /127.0.0.1:41760 [Receiving block BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1420000849_17 at /127.0.0.1:41720 [Receiving block BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322032218_17 at /127.0.0.1:59924 [Receiving block 
BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2084246142-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59420@0x4ffe55ff-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2084246142-641 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2084246142-642-acceptor-0@62c220fa-ServerConnector@4f2d7206{HTTP/1.1, (http/1.1)}{0.0.0.0:33587} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x32267dc-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1420000849_17 at /127.0.0.1:59882 [Receiving block BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105-prefix:jenkins-hbase4.apache.org,44439,1689200283155.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41907-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-73dd6b96-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2084246142-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105-prefix:jenkins-hbase4.apache.org,41907,1689200287570 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2084246142-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-322406406_17 at /127.0.0.1:41786 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41907 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322032218_17 at /127.0.0.1:54014 [Receiving block BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
jenkins-hbase4:41907Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40075 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2084246142-648 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59420@0x4ffe55ff-SendThread(127.0.0.1:59420) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1154619238_17 at /127.0.0.1:56268 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2084246142-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:40075 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-673939601_17 at /127.0.0.1:53982 [Waiting for operation #19] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59420@0x4ffe55ff sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1420000849_17 at /127.0.0.1:53990 [Receiving block BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41907 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-807763544-172.31.14.131-1689200277053:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2084246142-647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=417 (was 375) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=4400 (was 4722) 2023-07-12 22:18:14,669 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:14,687 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=509, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=417, ProcessCount=176, AvailableMemoryMB=4399 2023-07-12 22:18:14,687 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:14,687 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 22:18:14,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:14,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:14,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:14,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:14,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,712 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:14,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:14,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:14,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:14,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201494733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:14,734 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:14,736 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,739 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:14,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:14,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-12 22:18:14,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:45482 deadline: 1689201494741, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 22:18:14,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-12 22:18:14,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:45482 deadline: 1689201494742, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 22:18:14,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-12 22:18:14,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:45482 deadline: 1689201494744, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 22:18:14,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-12 22:18:14,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 22:18:14,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:14,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:14,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:14,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-12 22:18:14,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:14,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:14,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:14,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:14,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:14,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,798 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:14,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:14,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:14,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:14,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201494818, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:14,819 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:14,821 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:14,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,823 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:14,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:14,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,842 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=512 (was 509) Potentially hanging thread: hconnection-0x724df952-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=417 (was 417), ProcessCount=176 (was 176), AvailableMemoryMB=4398 (was 4399) 2023-07-12 22:18:14,843 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-12 22:18:14,864 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=417, ProcessCount=176, AvailableMemoryMB=4397 2023-07-12 22:18:14,864 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-12 22:18:14,864 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 22:18:14,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:14,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:14,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:14,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:14,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:14,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:14,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:14,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:14,883 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:14,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:14,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:14,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:14,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:14,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201494909, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:14,910 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:14,912 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:14,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,913 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:14,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:14,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:14,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:14,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
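The ConstraintException in the trace above comes from the previous test's tearDown, which asked the master to move jenkins-hbase4.apache.org:34283 back to the default group even though that server "is either offline or it does not exist", so the MoveServers call was rejected. Right after the cleanup wait, the next test's setup adds the group "bar" (the "add rsgroup bar" entry just above) and then moves three live region servers into it (the MoveServers entry that follows). A minimal sketch of that setup sequence, assuming the RSGroupAdminClient methods visible in the stack trace (a Connection-taking constructor, addRSGroup, moveServers) and the org.apache.hadoop.hbase.net.Address helper; the group name and host/port values are copied from the log, everything else is illustrative:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RsGroupSetupSketch {
      // Mirrors the AddRSGroup and MoveServers RPCs seen in the log; not the test's actual code.
      static void addBarAndFillIt(Connection connection) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        rsGroupAdmin.addRSGroup("bar");

        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41059));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41907));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37441));
        // The master rejects the request with ConstraintException (surfaced to the client
        // as RemoteWithExtrasException) if any requested server is offline or unknown,
        // which is exactly the failure shown in the trace above.
        rsGroupAdmin.moveServers(servers, "bar");
      }
    }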
2023-07-12 22:18:14,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:14,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:14,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:14,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:14,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:14,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:37441] to rsgroup bar 2023-07-12 22:18:14,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:14,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:14,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:14,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:14,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(238): Moving server region 5dadbef7ea97919927df58525570971d, which do not belong to RSGroup bar 2023-07-12 22:18:14,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE 2023-07-12 22:18:14,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 22:18:14,942 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE 2023-07-12 22:18:14,943 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:14,943 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200294942"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200294942"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200294942"}]},"ts":"1689200294942"} 2023-07-12 22:18:14,944 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:15,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5dadbef7ea97919927df58525570971d, disabling compactions & flushes 2023-07-12 22:18:15,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:15,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:15,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. after waiting 0 ms 2023-07-12 22:18:15,100 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:15,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-12 22:18:15,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:15,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:15,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5dadbef7ea97919927df58525570971d move to jenkins-hbase4.apache.org,44439,1689200283155 record at close sequenceid=10 2023-07-12 22:18:15,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,116 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=CLOSED 2023-07-12 22:18:15,116 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200295116"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200295116"}]},"ts":"1689200295116"} 2023-07-12 22:18:15,121 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-12 22:18:15,121 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,41907,1689200287570 in 174 msec 2023-07-12 22:18:15,121 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:15,272 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:15,272 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200295272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200295272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200295272"}]},"ts":"1689200295272"} 2023-07-12 22:18:15,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:15,430 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:15,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5dadbef7ea97919927df58525570971d, NAME => 'hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:15,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:15,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,431 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,433 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,434 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:15,434 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info 2023-07-12 22:18:15,435 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5dadbef7ea97919927df58525570971d columnFamilyName info 2023-07-12 22:18:15,442 DEBUG [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(539): loaded hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/info/0be1a286cac24ae4b40e2298b7fa2970 2023-07-12 22:18:15,442 INFO [StoreOpener-5dadbef7ea97919927df58525570971d-1] regionserver.HStore(310): Store=5dadbef7ea97919927df58525570971d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:15,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:15,449 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5dadbef7ea97919927df58525570971d; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9689157760, jitterRate=-0.09762686491012573}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:15,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:15,451 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d., pid=83, masterSystemTime=1689200295426 2023-07-12 22:18:15,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:15,452 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
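At this point the hbase:namespace region has been closed on jenkins-hbase4.apache.org,41907 and reopened on jenkins-hbase4.apache.org,44439 as a side effect of the group move. A sketch of how a client could confirm which server currently hosts that region, using the standard RegionLocator API; the helper name is illustrative, and reload=true simply bypasses the client-side location cache:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class RegionLocationSketch {
      // Asks the cluster (not the cache) which server hosts the single hbase:namespace region.
      static String currentNamespaceRegionServer(Connection connection) throws IOException {
        try (RegionLocator locator = connection.getRegionLocator(TableName.NAMESPACE_TABLE_NAME)) {
          HRegionLocation location = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          // e.g. "jenkins-hbase4.apache.org,44439,1689200283155" after the move logged above
          return location.getServerName().toString();
        }
      }
    }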
2023-07-12 22:18:15,453 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5dadbef7ea97919927df58525570971d, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:15,453 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200295453"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200295453"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200295453"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200295453"}]},"ts":"1689200295453"} 2023-07-12 22:18:15,457 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-12 22:18:15,457 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 5dadbef7ea97919927df58525570971d, server=jenkins-hbase4.apache.org,44439,1689200283155 in 181 msec 2023-07-12 22:18:15,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5dadbef7ea97919927df58525570971d, REOPEN/MOVE in 518 msec 2023-07-12 22:18:15,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-12 22:18:15,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965, jenkins-hbase4.apache.org,41907,1689200287570] are moved back to default 2023-07-12 22:18:15,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 22:18:15,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:15,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:15,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 22:18:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:15,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-12 22:18:15,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:15,956 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:15,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-12 22:18:15,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 22:18:15,958 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:15,959 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:15,959 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:15,960 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:15,963 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:15,969 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:15,970 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 empty. 
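The CreateTableProcedure above (pid=84) was started by a client request to create Group_testFailRemoveGroup with a single column family 'f' and otherwise default attributes (the BLOOMFILTER, VERSIONS, BLOCKSIZE, etc. values printed in the log are the defaults). A minimal sketch of the equivalent client call with the HBase 2.x descriptor builders; the test itself may build the descriptor differently:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class CreateTableSketch {
      // Creates the same single-family table the log shows; all other attributes stay at defaults.
      static void createGroupTestTable(Connection connection) throws IOException {
        TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
        try (Admin admin = connection.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
        }
      }
    }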
2023-07-12 22:18:15,970 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:15,971 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 22:18:15,991 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:15,993 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab02bb26d3705cb4669f804602a882d3, NAME => 'Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:16,007 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:16,008 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ab02bb26d3705cb4669f804602a882d3, disabling compactions & flushes 2023-07-12 22:18:16,008 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,008 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,008 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. after waiting 0 ms 2023-07-12 22:18:16,008 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,008 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:16,008 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:16,015 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:16,016 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296016"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200296016"}]},"ts":"1689200296016"} 2023-07-12 22:18:16,018 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:16,020 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:16,021 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200296020"}]},"ts":"1689200296020"} 2023-07-12 22:18:16,022 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 22:18:16,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, ASSIGN}] 2023-07-12 22:18:16,033 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, ASSIGN 2023-07-12 22:18:16,034 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:16,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 22:18:16,185 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:16,186 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296185"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200296185"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200296185"}]},"ts":"1689200296185"} 2023-07-12 22:18:16,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 
22:18:16,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 22:18:16,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab02bb26d3705cb4669f804602a882d3, NAME => 'Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,346 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,348 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:16,348 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:16,348 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab02bb26d3705cb4669f804602a882d3 columnFamilyName f 2023-07-12 22:18:16,349 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(310): Store=ab02bb26d3705cb4669f804602a882d3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:16,350 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:16,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab02bb26d3705cb4669f804602a882d3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11014151520, jitterRate=0.02577279508113861}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:16,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:16,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3., pid=86, masterSystemTime=1689200296339 2023-07-12 22:18:16,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:16,359 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:16,359 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296359"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200296359"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200296359"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200296359"}]},"ts":"1689200296359"} 2023-07-12 22:18:16,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-12 22:18:16,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155 in 173 msec 2023-07-12 22:18:16,366 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 22:18:16,366 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, ASSIGN in 333 msec 2023-07-12 22:18:16,366 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:16,367 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200296366"}]},"ts":"1689200296366"} 2023-07-12 22:18:16,368 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 22:18:16,370 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:16,373 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 419 msec 2023-07-12 22:18:16,389 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 22:18:16,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 22:18:16,562 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-12 22:18:16,562 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 22:18:16,562 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:16,568 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. 
Checking AM states. 2023-07-12 22:18:16,569 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:16,569 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 22:18:16,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 22:18:16,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:16,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:16,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:16,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:16,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 22:18:16,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region ab02bb26d3705cb4669f804602a882d3 to RSGroup bar 2023-07-12 22:18:16,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:16,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:16,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:16,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:16,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 22:18:16,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:16,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE 2023-07-12 22:18:16,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 22:18:16,580 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE 2023-07-12 22:18:16,583 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:16,583 DEBUG [PEWorker-2] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296583"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200296583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200296583"}]},"ts":"1689200296583"} 2023-07-12 22:18:16,587 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:16,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab02bb26d3705cb4669f804602a882d3, disabling compactions & flushes 2023-07-12 22:18:16,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. after waiting 0 ms 2023-07-12 22:18:16,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:16,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:16,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
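The "move tables [Group_testFailRemoveGroup] to rsgroup bar" request logged above is what triggers this REOPEN/MOVE of region ab02bb26d3705cb4669f804602a882d3: moving a table into a group forces its regions to be reopened on servers belonging to that group. A minimal sketch of the client call, assuming the RSGroupAdminClient.moveTables(Set<TableName>, String) signature implied by the RSGroupAdminEndpoint entries in the log:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveTablesSketch {
      // Moving the table into "bar" makes the master run the TransitRegionStateProcedure
      // (REOPEN/MOVE) sequence visible above, relocating the region onto a "bar" server.
      static void moveTableToBar(Connection connection) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
      }
    }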
2023-07-12 22:18:16,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:16,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ab02bb26d3705cb4669f804602a882d3 move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:16,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:16,754 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSED 2023-07-12 22:18:16,754 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200296754"}]},"ts":"1689200296754"} 2023-07-12 22:18:16,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 22:18:16,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155 in 172 msec 2023-07-12 22:18:16,759 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:16,910 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:16,910 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:16,910 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200296910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200296910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200296910"}]},"ts":"1689200296910"} 2023-07-12 22:18:16,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:17,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:17,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab02bb26d3705cb4669f804602a882d3, NAME => 'Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:17,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:17,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,075 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,077 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:17,077 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:17,077 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab02bb26d3705cb4669f804602a882d3 columnFamilyName f 2023-07-12 22:18:17,078 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(310): Store=ab02bb26d3705cb4669f804602a882d3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:17,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,082 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab02bb26d3705cb4669f804602a882d3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12002820960, jitterRate=0.11784981191158295}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:17,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:17,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3., pid=89, masterSystemTime=1689200297064 2023-07-12 22:18:17,099 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:17,099 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:17,100 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:17,100 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200297100"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200297100"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200297100"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200297100"}]},"ts":"1689200297100"} 2023-07-12 22:18:17,104 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-12 22:18:17,104 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,37441,1689200282765 in 190 msec 2023-07-12 22:18:17,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE in 526 msec 2023-07-12 22:18:17,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-12 22:18:17,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-12 22:18:17,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:17,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:17,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:17,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 22:18:17,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:17,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 22:18:17,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:17,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:45482 deadline: 1689201497587, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 22:18:17,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:17,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:17,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:45482 deadline: 1689201497588, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 22:18:17,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 22:18:17,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:17,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:17,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:17,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:17,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 22:18:17,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region ab02bb26d3705cb4669f804602a882d3 to RSGroup default 2023-07-12 22:18:17,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE 2023-07-12 22:18:17,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 22:18:17,599 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE 2023-07-12 22:18:17,599 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:17,599 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200297599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200297599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200297599"}]},"ts":"1689200297599"} 2023-07-12 22:18:17,601 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:17,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab02bb26d3705cb4669f804602a882d3, disabling compactions & flushes 2023-07-12 22:18:17,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:17,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:17,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. after waiting 0 ms 2023-07-12 22:18:17,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:17,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:17,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:17,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:17,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ab02bb26d3705cb4669f804602a882d3 move to jenkins-hbase4.apache.org,44439,1689200283155 record at close sequenceid=5 2023-07-12 22:18:17,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:17,764 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSED 2023-07-12 22:18:17,764 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200297764"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200297764"}]},"ts":"1689200297764"} 2023-07-12 22:18:17,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-12 22:18:17,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,37441,1689200282765 in 165 msec 2023-07-12 22:18:17,768 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:17,919 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:17,919 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200297919"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200297919"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200297919"}]},"ts":"1689200297919"} 2023-07-12 22:18:17,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:18,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:18,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab02bb26d3705cb4669f804602a882d3, NAME => 'Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:18,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:18,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,080 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,082 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:18,082 DEBUG [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f 2023-07-12 22:18:18,083 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab02bb26d3705cb4669f804602a882d3 columnFamilyName f 2023-07-12 22:18:18,084 INFO [StoreOpener-ab02bb26d3705cb4669f804602a882d3-1] regionserver.HStore(310): Store=ab02bb26d3705cb4669f804602a882d3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:18,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,087 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab02bb26d3705cb4669f804602a882d3; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11992168320, jitterRate=0.11685770750045776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:18,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:18,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3., pid=92, masterSystemTime=1689200298072 2023-07-12 22:18:18,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:18,099 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:18,099 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:18,099 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200298099"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200298099"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200298099"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200298099"}]},"ts":"1689200298099"} 2023-07-12 22:18:18,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-12 22:18:18,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155 in 180 msec 2023-07-12 22:18:18,109 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, REOPEN/MOVE in 509 msec 2023-07-12 22:18:18,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-12 22:18:18,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-12 22:18:18,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:18,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:18,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:18,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 22:18:18,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:18,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:45482 deadline: 1689201498605, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
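The two ConstraintException rejections above show the ordering the master enforces: a rsgroup can only be removed once it holds neither tables nor servers. The log's own stack traces name the client used for these calls (org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient), so the sequence the test eventually succeeds with can be sketched as below. This is a minimal illustration only: the Connection `conn`, the wrapper class/method names, and the literal group, table, and server names (taken from the log) are assumptions, not part of the test source.

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveBarGroupSketch {
  // Drain group "bar" in the order the constraint checks above require,
  // then remove it. `conn` is an assumed open cluster Connection.
  static void drainAndRemove(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // 1. Move the group's table(s) back to the default group
    //    (this is what triggers the REOPEN/MOVE procedures seen above).
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");

    // 2. Move the group's servers back to the default group.
    Set<Address> servers = new HashSet<>(Arrays.asList(
        Address.fromString("jenkins-hbase4.apache.org:37441"),
        Address.fromString("jenkins-hbase4.apache.org:41059"),
        Address.fromString("jenkins-hbase4.apache.org:41907")));
    rsGroupAdmin.moveServers(servers, "default");

    // 3. Only now does removeRSGroup pass both ConstraintException checks.
    rsGroupAdmin.removeRSGroup("bar");
  }
}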
2023-07-12 22:18:18,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:18,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:18,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 22:18:18,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:18,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:18,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 22:18:18,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965, jenkins-hbase4.apache.org,41907,1689200287570] are moved back to bar 2023-07-12 22:18:18,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 22:18:18,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:18,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:18,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:18,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 22:18:18,617 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41907] ipc.CallRunner(144): callId: 222 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:34050 deadline: 1689200358617, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44439 startCode=1689200283155. As of locationSeqNum=10. 
2023-07-12 22:18:18,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:18,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:18,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:18,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:18,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:18,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:18,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:18,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:18,736 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 22:18:18,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-12 22:18:18,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:18,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 22:18:18,740 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200298740"}]},"ts":"1689200298740"} 2023-07-12 22:18:18,741 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 22:18:18,744 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 22:18:18,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, UNASSIGN}] 2023-07-12 22:18:18,746 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, UNASSIGN 2023-07-12 22:18:18,747 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:18,747 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200298747"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200298747"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200298747"}]},"ts":"1689200298747"} 2023-07-12 22:18:18,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:18,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 22:18:18,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab02bb26d3705cb4669f804602a882d3, disabling compactions & flushes 2023-07-12 22:18:18,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:18,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:18,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. after waiting 0 ms 2023-07-12 22:18:18,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 2023-07-12 22:18:18,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 22:18:18,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3. 
2023-07-12 22:18:18,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab02bb26d3705cb4669f804602a882d3: 2023-07-12 22:18:18,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:18,909 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=ab02bb26d3705cb4669f804602a882d3, regionState=CLOSED 2023-07-12 22:18:18,910 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689200298909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200298909"}]},"ts":"1689200298909"} 2023-07-12 22:18:18,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 22:18:18,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure ab02bb26d3705cb4669f804602a882d3, server=jenkins-hbase4.apache.org,44439,1689200283155 in 163 msec 2023-07-12 22:18:18,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-12 22:18:18,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ab02bb26d3705cb4669f804602a882d3, UNASSIGN in 168 msec 2023-07-12 22:18:18,915 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200298915"}]},"ts":"1689200298915"} 2023-07-12 22:18:18,916 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-12 22:18:18,918 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-12 22:18:18,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 183 msec 2023-07-12 22:18:19,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 22:18:19,042 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-12 22:18:19,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-12 22:18:19,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,046 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 22:18:19,047 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:19,051 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:19,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 22:18:19,054 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits] 2023-07-12 22:18:19,060 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/10.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3/recovered.edits/10.seqid 2023-07-12 22:18:19,060 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testFailRemoveGroup/ab02bb26d3705cb4669f804602a882d3 2023-07-12 22:18:19,060 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 22:18:19,063 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,066 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 22:18:19,068 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 22:18:19,069 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,069 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-12 22:18:19,069 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200299069"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:19,071 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 22:18:19,071 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ab02bb26d3705cb4669f804602a882d3, NAME => 'Group_testFailRemoveGroup,,1689200295952.ab02bb26d3705cb4669f804602a882d3.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 22:18:19,071 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-12 22:18:19,071 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200299071"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:19,076 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 22:18:19,078 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 22:18:19,080 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 35 msec 2023-07-12 22:18:19,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 22:18:19,154 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-12 22:18:19,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:19,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
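The DISABLE (pid=93) and DELETE (pid=96) procedures recorded above are driven from the test's client side through the ordinary Admin API: the table must be disabled before DeleteTableProcedure will archive its regions and clear hbase:meta. A minimal sketch of that client side follows; the Connection `conn` and the wrapper class/method names are assumptions for illustration.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTestTableSketch {
  // Disable then delete the test table, mirroring pids 93-96 in the log.
  static void drop(Connection conn) throws IOException {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);  // DisableTableProcedure (pid=93 above)
      }
      admin.deleteTable(table);     // DeleteTableProcedure (pid=96 above)
    }
  }
}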
2023-07-12 22:18:19,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:19,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:19,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:19,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:19,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:19,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:19,176 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:19,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:19,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:19,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:19,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:19,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:19,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201499189, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:19,190 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:19,192 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:19,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,193 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:19,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:19,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:19,216 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=513 (was 512) Potentially hanging thread: hconnection-0x32267dc-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b822857-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1609034275_17 at /127.0.0.1:41786 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_1609034275_17 at /127.0.0.1:49472 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322032218_17 at /127.0.0.1:56268 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=814 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 417), ProcessCount=176 (was 176), AvailableMemoryMB=4362 (was 4397) 2023-07-12 22:18:19,216 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:19,235 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=513, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=176, AvailableMemoryMB=4361 2023-07-12 22:18:19,235 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:19,235 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 22:18:19,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:19,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:19,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:19,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:19,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:19,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:19,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:19,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:19,253 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:19,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:19,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,257 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:19,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:19,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:19,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:19,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201499272, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:19,273 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:19,277 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:19,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,279 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:19,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:19,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:19,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:19,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:19,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_696227763 2023-07-12 22:18:19,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:19,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:19,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:19,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,298 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441] to rsgroup Group_testMultiTableMove_696227763 2023-07-12 22:18:19,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:19,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:19,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:19,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765] are moved back to default 2023-07-12 22:18:19,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_696227763 2023-07-12 22:18:19,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:19,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:19,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:19,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_696227763 2023-07-12 22:18:19,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:19,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:19,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:19,314 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:19,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-12 22:18:19,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 22:18:19,316 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:19,316 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,316 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,317 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:19,325 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:19,327 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,327 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb empty. 2023-07-12 22:18:19,328 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,328 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 22:18:19,342 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:19,343 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 335d69b6e510f48cce9fd58b561debdb, NAME => 'GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
335d69b6e510f48cce9fd58b561debdb, disabling compactions & flushes 2023-07-12 22:18:19,354 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. after waiting 0 ms 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,354 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,354 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 335d69b6e510f48cce9fd58b561debdb: 2023-07-12 22:18:19,356 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:19,357 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200299357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200299357"}]},"ts":"1689200299357"} 2023-07-12 22:18:19,358 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 22:18:19,359 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:19,359 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200299359"}]},"ts":"1689200299359"} 2023-07-12 22:18:19,361 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 22:18:19,367 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:19,367 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:19,367 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:19,368 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:19,368 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:19,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, ASSIGN}] 2023-07-12 22:18:19,370 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, ASSIGN 2023-07-12 22:18:19,371 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:19,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 22:18:19,521 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 22:18:19,522 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:19,523 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200299522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200299522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200299522"}]},"ts":"1689200299522"} 2023-07-12 22:18:19,524 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:19,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 22:18:19,680 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 335d69b6e510f48cce9fd58b561debdb, NAME => 'GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,683 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,685 DEBUG [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/f 2023-07-12 22:18:19,685 DEBUG [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/f 2023-07-12 22:18:19,685 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 335d69b6e510f48cce9fd58b561debdb columnFamilyName f 2023-07-12 22:18:19,686 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] regionserver.HStore(310): Store=335d69b6e510f48cce9fd58b561debdb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:19,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:19,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:19,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 335d69b6e510f48cce9fd58b561debdb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11625256160, jitterRate=0.08268634974956512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:19,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 335d69b6e510f48cce9fd58b561debdb: 2023-07-12 22:18:19,693 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb., pid=99, masterSystemTime=1689200299676 2023-07-12 22:18:19,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:19,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 
2023-07-12 22:18:19,695 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:19,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200299695"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200299695"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200299695"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200299695"}]},"ts":"1689200299695"} 2023-07-12 22:18:19,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-12 22:18:19,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,44439,1689200283155 in 173 msec 2023-07-12 22:18:19,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-12 22:18:19,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, ASSIGN in 330 msec 2023-07-12 22:18:19,701 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:19,701 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200299701"}]},"ts":"1689200299701"} 2023-07-12 22:18:19,703 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 22:18:19,708 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:19,709 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 397 msec 2023-07-12 22:18:19,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 22:18:19,918 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-12 22:18:19,918 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 22:18:19,918 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:19,924 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-12 22:18:19,924 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:19,924 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 22:18:19,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:19,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:19,930 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:19,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-12 22:18:19,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 22:18:19,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:19,933 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:19,934 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:19,934 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:19,937 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:19,939 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:19,939 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 empty. 
2023-07-12 22:18:19,940 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:19,940 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 22:18:19,956 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:19,957 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 824bf3f18fbe15c1f3bd0996b30df2e7, NAME => 'GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:19,968 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:19,968 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 824bf3f18fbe15c1f3bd0996b30df2e7, disabling compactions & flushes 2023-07-12 22:18:19,968 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:19,968 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:19,968 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. after waiting 0 ms 2023-07-12 22:18:19,968 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:19,969 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:19,969 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 824bf3f18fbe15c1f3bd0996b30df2e7: 2023-07-12 22:18:19,975 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:19,976 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200299976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200299976"}]},"ts":"1689200299976"} 2023-07-12 22:18:19,977 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:19,978 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:19,978 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200299978"}]},"ts":"1689200299978"} 2023-07-12 22:18:19,980 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 22:18:19,984 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:19,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:19,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:19,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:19,985 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:19,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, ASSIGN}] 2023-07-12 22:18:19,987 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, ASSIGN 2023-07-12 22:18:19,991 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41059,1689200282965; forceNewPlan=false, retain=false 2023-07-12 22:18:20,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 22:18:20,141 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 22:18:20,143 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:20,143 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300143"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200300143"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200300143"}]},"ts":"1689200300143"} 2023-07-12 22:18:20,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:20,350 INFO [AsyncFSWAL-0-hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData-prefix:jenkins-hbase4.apache.org,34283,1689200280641] wal.AbstractFSWAL(1141): Slow sync cost: 205 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43679,DS-fd1a0edc-c02a-4632-9d42-b6a8dfc38c34,DISK], DatanodeInfoWithStorage[127.0.0.1:33045,DS-44a300e1-bae4-42cc-9ad0-7dfbbaccc2e0,DISK], DatanodeInfoWithStorage[127.0.0.1:46197,DS-1de8e13e-6649-4d05-8631-84ff0e590406,DISK]] 2023-07-12 22:18:20,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 22:18:20,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:20,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 824bf3f18fbe15c1f3bd0996b30df2e7, NAME => 'GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:20,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:20,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,510 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,512 DEBUG [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/f 2023-07-12 22:18:20,512 DEBUG [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/f 2023-07-12 22:18:20,512 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 824bf3f18fbe15c1f3bd0996b30df2e7 columnFamilyName f 2023-07-12 22:18:20,513 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] regionserver.HStore(310): Store=824bf3f18fbe15c1f3bd0996b30df2e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:20,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:20,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 824bf3f18fbe15c1f3bd0996b30df2e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9895576000, jitterRate=-0.07840266823768616}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:20,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 824bf3f18fbe15c1f3bd0996b30df2e7: 2023-07-12 22:18:20,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7., pid=102, masterSystemTime=1689200300502 2023-07-12 22:18:20,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:20,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:20,537 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:20,537 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200300537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200300537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200300537"}]},"ts":"1689200300537"} 2023-07-12 22:18:20,540 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-12 22:18:20,540 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,41059,1689200282965 in 395 msec 2023-07-12 22:18:20,556 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-12 22:18:20,556 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, ASSIGN in 555 msec 2023-07-12 22:18:20,560 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:20,561 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200300561"}]},"ts":"1689200300561"} 2023-07-12 22:18:20,563 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 22:18:20,566 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:20,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 640 msec 2023-07-12 22:18:20,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 22:18:20,653 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-12 22:18:20,653 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 22:18:20,654 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:20,679 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
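[Editor's note] The CreateTableProcedure for GrouptestMultiTableMoveB above, followed by the "Waiting until all regions of table ... get assigned" checks, is the usual mini-cluster setup pattern. A minimal sketch of that pattern, assuming the HBaseTestingUtility helpers from the hbase-server test jar; the table and column-family names are copied from the log, everything else (class name, variable names) is illustrative:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MultiTableMoveSetupSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            util.startMiniCluster(3);  // three region servers, as in this run
            TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
            // Single region, single column family 'f' (matches the HStore lines above).
            util.createTable(tableB, Bytes.toBytes("f"));
            // Blocks until hbase:meta and the AssignmentManager agree the region is open,
            // i.e. the "Waiting until all regions ... get assigned" phase in the log.
            util.waitUntilAllRegionsAssigned(tableB);
            util.shutdownMiniCluster();
        }
    }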
2023-07-12 22:18:20,679 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:20,679 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 22:18:20,680 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:20,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 22:18:20,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:20,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 22:18:20,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:20,693 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_696227763 2023-07-12 22:18:20,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_696227763 2023-07-12 22:18:20,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:20,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:20,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:20,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:20,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_696227763 2023-07-12 22:18:20,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region 824bf3f18fbe15c1f3bd0996b30df2e7 to RSGroup Group_testMultiTableMove_696227763 2023-07-12 22:18:20,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, REOPEN/MOVE 2023-07-12 22:18:20,713 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, REOPEN/MOVE 
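[Editor's note] The MoveTables request logged above ("move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_696227763") is what a client issues through the rsgroup coprocessor endpoint. A minimal sketch, assuming the RSGroupAdminClient wrapper shipped in the hbase-rsgroup module of branch-2; the class and method signatures are my assumption of that API, not quoted from this test:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection()) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                Set<TableName> tables = new HashSet<>();
                tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
                tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
                // Each region of the moved tables gets a REOPEN/MOVE
                // TransitRegionStateProcedure, visible as pid=103 and pid=104 below.
                rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_696227763");
            }
        }
    }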
2023-07-12 22:18:20,714 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:20,714 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300714"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200300714"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200300714"}]},"ts":"1689200300714"} 2023-07-12 22:18:20,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_696227763 2023-07-12 22:18:20,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region 335d69b6e510f48cce9fd58b561debdb to RSGroup Group_testMultiTableMove_696227763 2023-07-12 22:18:20,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, REOPEN/MOVE 2023-07-12 22:18:20,719 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, REOPEN/MOVE 2023-07-12 22:18:20,720 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:20,720 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300720"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200300720"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200300720"}]},"ts":"1689200300720"} 2023-07-12 22:18:20,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_696227763, current retry=0 2023-07-12 22:18:20,724 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:20,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:20,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 824bf3f18fbe15c1f3bd0996b30df2e7, disabling compactions & flushes 2023-07-12 22:18:20,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:20,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:20,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. after waiting 0 ms 2023-07-12 22:18:20,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:20,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:20,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 335d69b6e510f48cce9fd58b561debdb, disabling compactions & flushes 2023-07-12 22:18:20,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:20,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:20,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. after waiting 0 ms 2023-07-12 22:18:20,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:20,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:20,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:20,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:20,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 335d69b6e510f48cce9fd58b561debdb: 2023-07-12 22:18:20,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 335d69b6e510f48cce9fd58b561debdb move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:20,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:20,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 824bf3f18fbe15c1f3bd0996b30df2e7: 2023-07-12 22:18:20,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 824bf3f18fbe15c1f3bd0996b30df2e7 move to jenkins-hbase4.apache.org,37441,1689200282765 record at close sequenceid=2 2023-07-12 22:18:20,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:20,908 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=CLOSED 2023-07-12 22:18:20,909 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300908"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200300908"}]},"ts":"1689200300908"} 2023-07-12 22:18:20,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:20,913 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=CLOSED 2023-07-12 22:18:20,913 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200300913"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200300913"}]},"ts":"1689200300913"} 2023-07-12 22:18:20,924 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-12 22:18:20,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,44439,1689200283155 in 188 msec 2023-07-12 22:18:20,925 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-12 22:18:20,925 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,41059,1689200282965 in 191 msec 2023-07-12 22:18:20,926 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:20,926 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1689200282765; forceNewPlan=false, retain=false 2023-07-12 22:18:21,076 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 
22:18:21,076 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:21,077 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301076"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200301076"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200301076"}]},"ts":"1689200301076"} 2023-07-12 22:18:21,077 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301076"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200301076"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200301076"}]},"ts":"1689200301076"} 2023-07-12 22:18:21,078 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:21,079 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:21,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:21,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 824bf3f18fbe15c1f3bd0996b30df2e7, NAME => 'GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:21,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:21,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,236 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,237 DEBUG [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/f 2023-07-12 22:18:21,237 DEBUG [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/f 2023-07-12 22:18:21,238 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 824bf3f18fbe15c1f3bd0996b30df2e7 columnFamilyName f 2023-07-12 22:18:21,238 INFO [StoreOpener-824bf3f18fbe15c1f3bd0996b30df2e7-1] regionserver.HStore(310): Store=824bf3f18fbe15c1f3bd0996b30df2e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:21,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:21,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 824bf3f18fbe15c1f3bd0996b30df2e7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11462083520, jitterRate=0.06748971343040466}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:21,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 824bf3f18fbe15c1f3bd0996b30df2e7: 2023-07-12 22:18:21,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7., pid=107, masterSystemTime=1689200301230 2023-07-12 22:18:21,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:21,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:21,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:21,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 335d69b6e510f48cce9fd58b561debdb, NAME => 'GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:21,247 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:21,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:21,247 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301247"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200301247"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200301247"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200301247"}]},"ts":"1689200301247"} 2023-07-12 22:18:21,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,248 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,249 DEBUG [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/f 2023-07-12 22:18:21,249 DEBUG [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/f 2023-07-12 22:18:21,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-12 22:18:21,250 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 335d69b6e510f48cce9fd58b561debdb columnFamilyName f 2023-07-12 22:18:21,250 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,37441,1689200282765 in 170 msec 2023-07-12 22:18:21,250 INFO [StoreOpener-335d69b6e510f48cce9fd58b561debdb-1] regionserver.HStore(310): Store=335d69b6e510f48cce9fd58b561debdb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:21,251 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, REOPEN/MOVE in 540 msec 2023-07-12 22:18:21,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 335d69b6e510f48cce9fd58b561debdb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11464569920, jitterRate=0.06772127747535706}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:21,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 335d69b6e510f48cce9fd58b561debdb: 2023-07-12 22:18:21,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb., pid=108, masterSystemTime=1689200301230 2023-07-12 22:18:21,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:21,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 
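[Editor's note] Once both REOPEN/MOVE procedures complete (pid=103 and pid=104 above), the client re-queries group membership; the GetRSGroupInfoOfTable requests a few lines further down are that check. A hedged sketch of such a verification step, again assuming the branch-2 RSGroupAdminClient and RSGroupInfo types; the helper method is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupMembershipCheckSketch {
        // Returns true when both moved tables report the expected rsgroup.
        static boolean tablesInGroup(RSGroupAdminClient rsGroupAdmin, String group) throws IOException {
            RSGroupInfo a = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
            RSGroupInfo b = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
            return a != null && b != null
                && group.equals(a.getName())
                && group.equals(b.getName());
        }
    }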
2023-07-12 22:18:21,258 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:21,258 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301258"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200301258"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200301258"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200301258"}]},"ts":"1689200301258"} 2023-07-12 22:18:21,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-12 22:18:21,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,37441,1689200282765 in 180 msec 2023-07-12 22:18:21,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, REOPEN/MOVE in 546 msec 2023-07-12 22:18:21,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-12 22:18:21,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_696227763. 2023-07-12 22:18:21,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:21,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:21,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:21,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 22:18:21,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:21,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 22:18:21,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:21,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:21,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:21,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_696227763 2023-07-12 22:18:21,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:21,734 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 22:18:21,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-12 22:18:21,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:21,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 22:18:21,738 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200301738"}]},"ts":"1689200301738"} 2023-07-12 22:18:21,740 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 22:18:21,742 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 22:18:21,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, UNASSIGN}] 2023-07-12 22:18:21,744 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, UNASSIGN 2023-07-12 22:18:21,744 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:21,745 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301744"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200301744"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200301744"}]},"ts":"1689200301744"} 2023-07-12 22:18:21,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 335d69b6e510f48cce9fd58b561debdb, 
server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:21,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 22:18:21,846 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 22:18:21,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 335d69b6e510f48cce9fd58b561debdb, disabling compactions & flushes 2023-07-12 22:18:21,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:21,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:21,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. after waiting 0 ms 2023-07-12 22:18:21,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 2023-07-12 22:18:21,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:21,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb. 
2023-07-12 22:18:21,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 335d69b6e510f48cce9fd58b561debdb: 2023-07-12 22:18:21,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:21,906 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=335d69b6e510f48cce9fd58b561debdb, regionState=CLOSED 2023-07-12 22:18:21,906 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200301906"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200301906"}]},"ts":"1689200301906"} 2023-07-12 22:18:21,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-12 22:18:21,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 335d69b6e510f48cce9fd58b561debdb, server=jenkins-hbase4.apache.org,37441,1689200282765 in 161 msec 2023-07-12 22:18:21,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-12 22:18:21,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=335d69b6e510f48cce9fd58b561debdb, UNASSIGN in 167 msec 2023-07-12 22:18:21,911 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200301911"}]},"ts":"1689200301911"} 2023-07-12 22:18:21,912 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 22:18:21,914 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 22:18:21,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 179 msec 2023-07-12 22:18:22,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 22:18:22,041 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-12 22:18:22,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-12 22:18:22,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,045 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_696227763' 2023-07-12 22:18:22,046 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:22,051 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:22,053 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits] 2023-07-12 22:18:22,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 22:18:22,061 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb/recovered.edits/7.seqid 2023-07-12 22:18:22,062 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveA/335d69b6e510f48cce9fd58b561debdb 2023-07-12 22:18:22,062 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 22:18:22,066 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,068 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 22:18:22,070 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 22:18:22,072 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,072 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-12 22:18:22,073 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200302073"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:22,075 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 22:18:22,075 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 335d69b6e510f48cce9fd58b561debdb, NAME => 'GrouptestMultiTableMoveA,,1689200299311.335d69b6e510f48cce9fd58b561debdb.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 22:18:22,075 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 22:18:22,075 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200302075"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:22,078 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 22:18:22,086 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 22:18:22,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 44 msec 2023-07-12 22:18:22,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 22:18:22,161 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-12 22:18:22,161 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 22:18:22,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-12 22:18:22,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 22:18:22,166 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200302166"}]},"ts":"1689200302166"} 2023-07-12 22:18:22,167 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 22:18:22,170 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 22:18:22,171 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, UNASSIGN}] 2023-07-12 22:18:22,173 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, UNASSIGN 2023-07-12 22:18:22,175 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:22,175 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200302175"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200302175"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200302175"}]},"ts":"1689200302175"} 2023-07-12 22:18:22,178 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,37441,1689200282765}] 2023-07-12 22:18:22,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 22:18:22,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:22,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 824bf3f18fbe15c1f3bd0996b30df2e7, disabling compactions & flushes 2023-07-12 22:18:22,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:22,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:22,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. after waiting 0 ms 2023-07-12 22:18:22,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 2023-07-12 22:18:22,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:22,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7. 
2023-07-12 22:18:22,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 824bf3f18fbe15c1f3bd0996b30df2e7: 2023-07-12 22:18:22,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:22,339 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=824bf3f18fbe15c1f3bd0996b30df2e7, regionState=CLOSED 2023-07-12 22:18:22,339 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689200302339"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200302339"}]},"ts":"1689200302339"} 2023-07-12 22:18:22,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 22:18:22,342 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 824bf3f18fbe15c1f3bd0996b30df2e7, server=jenkins-hbase4.apache.org,37441,1689200282765 in 162 msec 2023-07-12 22:18:22,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-12 22:18:22,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=824bf3f18fbe15c1f3bd0996b30df2e7, UNASSIGN in 171 msec 2023-07-12 22:18:22,344 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200302343"}]},"ts":"1689200302343"} 2023-07-12 22:18:22,345 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 22:18:22,346 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 22:18:22,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 185 msec 2023-07-12 22:18:22,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 22:18:22,468 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-12 22:18:22,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-12 22:18:22,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,472 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_696227763' 2023-07-12 22:18:22,472 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:22,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,476 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:22,478 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits] 2023-07-12 22:18:22,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 22:18:22,484 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits/7.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7/recovered.edits/7.seqid 2023-07-12 22:18:22,484 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/GrouptestMultiTableMoveB/824bf3f18fbe15c1f3bd0996b30df2e7 2023-07-12 22:18:22,484 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 22:18:22,487 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,489 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 22:18:22,490 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-12 22:18:22,491 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,491 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-12 22:18:22,491 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200302491"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:22,493 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 22:18:22,493 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 824bf3f18fbe15c1f3bd0996b30df2e7, NAME => 'GrouptestMultiTableMoveB,,1689200299926.824bf3f18fbe15c1f3bd0996b30df2e7.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 22:18:22,493 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 22:18:22,493 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200302493"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:22,494 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 22:18:22,496 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 22:18:22,497 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 27 msec 2023-07-12 22:18:22,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 22:18:22,582 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-12 22:18:22,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:22,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:22,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_696227763 2023-07-12 22:18:22,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_696227763, current retry=0 2023-07-12 22:18:22,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765] are moved back to Group_testMultiTableMove_696227763 2023-07-12 22:18:22,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_696227763 => default 2023-07-12 22:18:22,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_696227763 2023-07-12 22:18:22,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:22,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:22,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:22,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:22,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:22,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,612 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:22,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:22,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:22,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:22,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201502627, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:22,628 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:22,630 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:22,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,632 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:22,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,658 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509 (was 513), OpenFileDescriptor=804 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=382 (was 416), ProcessCount=174 (was 176), AvailableMemoryMB=4199 (was 4361) 2023-07-12 22:18:22,658 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:22,679 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=509, OpenFileDescriptor=804, MaxFileDescriptor=60000, SystemLoadAverage=382, ProcessCount=174, AvailableMemoryMB=4198 2023-07-12 22:18:22,679 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:22,680 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-12 22:18:22,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:22,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:22,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:22,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:22,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,695 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:22,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:22,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:22,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:22,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201502705, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:22,706 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:22,707 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:22,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,708 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:22,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-12 22:18:22,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup oldGroup 2023-07-12 22:18:22,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:22,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to default 2023-07-12 22:18:22,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 22:18:22,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 22:18:22,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 22:18:22,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,735 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-12 22:18:22,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 22:18:22,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:22,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41907] to rsgroup anotherRSGroup 2023-07-12 22:18:22,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 22:18:22,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:22,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:22,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41907,1689200287570] are moved back to default 2023-07-12 22:18:22,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 22:18:22,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,753 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 22:18:22,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 22:18:22,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 22:18:22,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:45482 deadline: 1689201502761, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 22:18:22,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 22:18:22,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:45482 deadline: 1689201502763, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 22:18:22,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-12 22:18:22,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:45482 deadline: 1689201502764, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 22:18:22,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-12 22:18:22,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:45482 deadline: 1689201502765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 22:18:22,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
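The four ConstraintExceptions above come from the rename checks in RSGroupInfoManagerImpl.renameRSGroup: the source group must exist, the target name must be free, and the default group can be neither source nor target. A minimal sketch of hitting the same checks from a client, assuming the branch-2.4 RSGroupAdminClient API (group names copied from the log):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameConstraintSketch {
      static void probeRenameConstraints(Connection conn) throws IOException {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        String[][] attempts = {
            {"nonExistingRSGroup", "newRSGroup1"}, // source group does not exist
            {"oldGroup", "anotherRSGroup"},        // target group already exists
            {"default", "newRSGroup2"},            // the default group cannot be renamed
            {"oldGroup", "default"},               // cannot rename onto the default group
        };
        for (String[] a : attempts) {
          try {
            admin.renameRSGroup(a[0], a[1]);
          } catch (ConstraintException e) {
            // Each attempt is rejected server-side, matching the exceptions in the log.
            System.out.println(a[0] + " -> " + a[1] + ": " + e.getMessage());
          }
        }
      }
    }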
2023-07-12 22:18:22,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41907] to rsgroup default 2023-07-12 22:18:22,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 22:18:22,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:22,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 22:18:22,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41907,1689200287570] are moved back to anotherRSGroup 2023-07-12 22:18:22,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 22:18:22,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-12 22:18:22,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 22:18:22,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-12 22:18:22,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:22,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 22:18:22,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 22:18:22,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to oldGroup 2023-07-12 22:18:22,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 22:18:22,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-12 22:18:22,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:22,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
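The teardown records above follow a fixed pattern: an (ignored) empty moveTables call, moving the group's servers back to default, then removing the now-empty group, which rewrites the znodes under /hbase/rsgroup. A rough client-side sketch of that sequence, assuming the branch-2.4 RSGroupAdminClient API, with hostnames and ports copied from the log:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupTeardownSketch {
      static void drainAndRemove(Connection conn) throws IOException {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        // Move the group's servers back to 'default' ...
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41059));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37441));
        admin.moveServers(servers, "default");
        // ... then drop the emptied group; the manager rewrites /hbase/rsgroup znodes.
        admin.removeRSGroup("oldGroup");
      }
    }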
2023-07-12 22:18:22,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:22,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:22,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:22,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,814 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:22,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:22,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:22,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:22,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201502827, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:22,828 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:22,829 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:22,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,830 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:22,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,849 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=513 (was 509) Potentially hanging thread: hconnection-0x724df952-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=804 (was 804), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=382 (was 382), ProcessCount=174 (was 174), AvailableMemoryMB=4200 (was 4198) - AvailableMemoryMB LEAK? - 2023-07-12 22:18:22,849 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:22,867 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=513, OpenFileDescriptor=804, MaxFileDescriptor=60000, SystemLoadAverage=382, ProcessCount=174, AvailableMemoryMB=4199 2023-07-12 22:18:22,867 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:22,867 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 22:18:22,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:22,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
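The ResourceChecker lines above compare before/after snapshots of thread count, open file descriptors, system load, process count, and available memory, and warn when the thread count exceeds 500. A hedged illustration of how such a snapshot can be taken with standard JMX beans (the class and method names here are made up for the sketch and are not the ResourceChecker implementation):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class ResourceSnapshotSketch {
      static String snapshot() {
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        // The UnixOperatingSystemMXBean cast works on Linux JVMs such as this Jenkins host.
        UnixOperatingSystemMXBean os =
            (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long openFds = os.getOpenFileDescriptorCount();
        long maxFds = os.getMaxFileDescriptorCount();
        double load = os.getSystemLoadAverage();
        long freeMb = Runtime.getRuntime().freeMemory() / (1024 * 1024);
        return "Thread=" + threads + ", OpenFileDescriptor=" + openFds
            + ", MaxFileDescriptor=" + maxFds + ", SystemLoadAverage=" + load
            + ", AvailableMemoryMB=" + freeMb;
      }
    }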
2023-07-12 22:18:22,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:22,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:22,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:22,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:22,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:22,884 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:22,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:22,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:22,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:22,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:22,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201502899, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:22,900 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:22,902 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:22,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,903 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:22,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:22,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-12 22:18:22,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:22,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:22,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup oldgroup 2023-07-12 22:18:22,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:22,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:22,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to default 2023-07-12 22:18:22,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 22:18:22,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:22,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:22,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:22,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 22:18:22,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:22,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:22,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 22:18:22,936 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:22,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-12 22:18:22,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 22:18:22,939 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:22,940 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:22,940 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:22,941 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:22,943 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:22,944 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:22,945 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 empty. 
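The create request logged above describes a single-family table ('testRename' with family 'tr', all attributes at their defaults). A minimal sketch of issuing the equivalent create through the HBase 2.x Admin API, with the names taken from the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameSketch {
      static void createTestRename(Admin admin) throws IOException {
        TableDescriptorBuilder td =
            TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"));
        // Synchronous create: the client polls until the CreateTableProcedure
        // (pid=117 in this run) reports done, as seen in the MasterRpcServices records.
        admin.createTable(td.build());
      }
    }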
2023-07-12 22:18:22,945 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:22,945 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 22:18:22,963 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:22,964 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9603db1b4bdf68cb3fd6350c6fcf3433, NAME => 'testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 9603db1b4bdf68cb3fd6350c6fcf3433, disabling compactions & flushes 2023-07-12 22:18:22,977 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. after waiting 0 ms 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:22,977 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:22,977 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:22,979 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:22,980 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200302980"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200302980"}]},"ts":"1689200302980"} 2023-07-12 22:18:22,981 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 22:18:22,982 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:22,982 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200302982"}]},"ts":"1689200302982"} 2023-07-12 22:18:22,983 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 22:18:22,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:22,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:22,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:22,987 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:22,987 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, ASSIGN}] 2023-07-12 22:18:22,989 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, ASSIGN 2023-07-12 22:18:22,990 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:23,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 22:18:23,140 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
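Once the TransitRegionStateProcedure above assigns the region, its location is written back to hbase:meta and can be read through the client; a small sketch using the 2.x RegionLocator API (table name from the log):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      static void printLocations(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          List<HRegionLocation> locs = locator.getAllRegionLocations();
          for (HRegionLocation loc : locs) {
            // Expected to report the server chosen by the balancer, e.g.
            // jenkins-hbase4.apache.org,41907,1689200287570 in this run.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }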
2023-07-12 22:18:23,141 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:23,142 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200303141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200303141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200303141"}]},"ts":"1689200303141"} 2023-07-12 22:18:23,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:23,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 22:18:23,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9603db1b4bdf68cb3fd6350c6fcf3433, NAME => 'testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,301 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,302 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:23,302 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:23,303 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9603db1b4bdf68cb3fd6350c6fcf3433 columnFamilyName tr 2023-07-12 22:18:23,303 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(310): Store=9603db1b4bdf68cb3fd6350c6fcf3433/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:23,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:23,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9603db1b4bdf68cb3fd6350c6fcf3433; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10010221440, jitterRate=-0.06772547960281372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:23,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:23,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433., pid=119, masterSystemTime=1689200303295 2023-07-12 22:18:23,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 
2023-07-12 22:18:23,312 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:23,312 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200303312"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200303312"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200303312"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200303312"}]},"ts":"1689200303312"} 2023-07-12 22:18:23,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 22:18:23,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570 in 170 msec 2023-07-12 22:18:23,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 22:18:23,316 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, ASSIGN in 328 msec 2023-07-12 22:18:23,317 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:23,317 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200303317"}]},"ts":"1689200303317"} 2023-07-12 22:18:23,318 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 22:18:23,322 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:23,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 389 msec 2023-07-12 22:18:23,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 22:18:23,541 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-12 22:18:23,541 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 22:18:23,541 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:23,545 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-12 22:18:23,545 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:23,545 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
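For reference, the CreateTableProcedure for 'testRename' traced above (pid=117, one column family 'tr', REGION_REPLICATION => '1') corresponds to an ordinary client-side table creation. A minimal sketch in Java, assuming the standard HBase 2.x Admin API rather than the test's own helper, might look like this:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'tr', as in the table descriptor logged by HRegion(7675) above.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
    }
  }
}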
2023-07-12 22:18:23,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-12 22:18:23,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:23,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:23,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:23,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:23,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 22:18:23,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region 9603db1b4bdf68cb3fd6350c6fcf3433 to RSGroup oldgroup 2023-07-12 22:18:23,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:23,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:23,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:23,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:23,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:23,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE 2023-07-12 22:18:23,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 22:18:23,553 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE 2023-07-12 22:18:23,554 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:23,554 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200303554"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200303554"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200303554"}]},"ts":"1689200303554"} 2023-07-12 22:18:23,555 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:23,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9603db1b4bdf68cb3fd6350c6fcf3433, disabling compactions & flushes 2023-07-12 22:18:23,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. after waiting 0 ms 2023-07-12 22:18:23,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:23,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:23,715 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:23,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9603db1b4bdf68cb3fd6350c6fcf3433 move to jenkins-hbase4.apache.org,41059,1689200282965 record at close sequenceid=2 2023-07-12 22:18:23,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:23,718 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=CLOSED 2023-07-12 22:18:23,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200303718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200303718"}]},"ts":"1689200303718"} 2023-07-12 22:18:23,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 22:18:23,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570 in 166 msec 2023-07-12 22:18:23,723 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41059,1689200282965; 
forceNewPlan=false, retain=false 2023-07-12 22:18:23,874 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:23,874 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:23,874 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200303874"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200303874"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200303874"}]},"ts":"1689200303874"} 2023-07-12 22:18:23,876 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:24,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:24,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9603db1b4bdf68cb3fd6350c6fcf3433, NAME => 'testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:24,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:24,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,034 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,035 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:24,035 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:24,035 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9603db1b4bdf68cb3fd6350c6fcf3433 columnFamilyName tr 2023-07-12 22:18:24,036 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(310): Store=9603db1b4bdf68cb3fd6350c6fcf3433/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:24,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:24,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9603db1b4bdf68cb3fd6350c6fcf3433; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11348873440, jitterRate=0.056946203112602234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:24,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:24,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433., pid=122, masterSystemTime=1689200304028 2023-07-12 22:18:24,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:24,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 
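The REOPEN/MOVE of region 9603db1b4bdf68cb3fd6350c6fcf3433 traced above is driven by the "move tables [testRename] to rsgroup oldgroup" request. A hedged sketch of issuing that request through the RSGroupAdminClient helper from the hbase-rsgroup module (the test's actual invocation is not shown in this log) could be:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToOldGroup {
  // Issues RSGroupAdminService.MoveTables for 'testRename', the request logged by
  // RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231) above.
  static void moveTestRename(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
  }
}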
2023-07-12 22:18:24,043 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:24,043 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200304043"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200304043"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200304043"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200304043"}]},"ts":"1689200304043"} 2023-07-12 22:18:24,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 22:18:24,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41059,1689200282965 in 169 msec 2023-07-12 22:18:24,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE in 494 msec 2023-07-12 22:18:24,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 22:18:24,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-12 22:18:24,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:24,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:24,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:24,559 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:24,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 22:18:24,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:24,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 22:18:24,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:24,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 22:18:24,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:24,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:24,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:24,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-12 22:18:24,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:24,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:24,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:24,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:24,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:24,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:24,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:24,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:24,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41907] to rsgroup normal 2023-07-12 22:18:24,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:24,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:24,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:24,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:24,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:24,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:24,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41907,1689200287570] are moved back to default 2023-07-12 22:18:24,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 22:18:24,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:24,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:24,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:24,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 22:18:24,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:24,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:24,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 22:18:24,592 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:24,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-12 22:18:24,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 22:18:24,594 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:24,594 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:24,595 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:24,595 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 22:18:24,595 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:24,597 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:24,599 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,599 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c empty. 2023-07-12 22:18:24,600 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,600 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 22:18:24,614 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:24,615 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => e434966f28850b76223c1f3ef1ceaf0c, NAME => 'unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:24,626 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:24,626 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing e434966f28850b76223c1f3ef1ceaf0c, disabling compactions & flushes 2023-07-12 22:18:24,626 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:24,626 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:24,626 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. after waiting 0 ms 2023-07-12 22:18:24,626 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:24,626 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:24,627 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:24,629 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:24,630 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200304629"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200304629"}]},"ts":"1689200304629"} 2023-07-12 22:18:24,631 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:24,632 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:24,632 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200304632"}]},"ts":"1689200304632"} 2023-07-12 22:18:24,633 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 22:18:24,636 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, ASSIGN}] 2023-07-12 22:18:24,638 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, ASSIGN 2023-07-12 22:18:24,639 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:24,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 22:18:24,790 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:24,790 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200304790"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200304790"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200304790"}]},"ts":"1689200304790"} 2023-07-12 22:18:24,792 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:24,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-12 22:18:24,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:24,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e434966f28850b76223c1f3ef1ceaf0c, NAME => 'unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:24,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:24,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,950 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,952 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:24,952 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:24,953 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e434966f28850b76223c1f3ef1ceaf0c columnFamilyName ut 2023-07-12 22:18:24,953 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(310): Store=e434966f28850b76223c1f3ef1ceaf0c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:24,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:24,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:24,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e434966f28850b76223c1f3ef1ceaf0c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9895809760, jitterRate=-0.07838089764118195}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:24,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:24,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c., pid=125, masterSystemTime=1689200304943 2023-07-12 22:18:24,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:24,964 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:24,966 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:24,966 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200304965"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200304965"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200304965"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200304965"}]},"ts":"1689200304965"} 2023-07-12 22:18:24,969 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-12 22:18:24,969 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155 in 175 msec 2023-07-12 22:18:24,970 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 22:18:24,971 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, ASSIGN in 333 msec 2023-07-12 22:18:24,971 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:24,971 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200304971"}]},"ts":"1689200304971"} 2023-07-12 22:18:24,973 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 22:18:24,976 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:24,977 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 387 msec 2023-07-12 22:18:25,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 22:18:25,197 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-12 22:18:25,197 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 22:18:25,197 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:25,202 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-12 22:18:25,202 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:25,202 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
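Earlier in this run the group 'normal' was added and server jenkins-hbase4.apache.org:41907 was moved into it (the AddRSGroup and MoveServers requests above). A rough sketch of those two calls, again assuming the RSGroupAdminClient helper rather than the test's own code, might be:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class SetUpNormalGroup {
  // Adds rsgroup 'normal' and moves regionserver jenkins-hbase4.apache.org:41907 into it,
  // matching the AddRSGroup and MoveServers requests logged above.
  static void addNormalGroupAndMoveServer(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("normal");
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41907)),
        "normal");
  }
}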
2023-07-12 22:18:25,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-12 22:18:25,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 22:18:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:25,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 22:18:25,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region e434966f28850b76223c1f3ef1ceaf0c to RSGroup normal 2023-07-12 22:18:25,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE 2023-07-12 22:18:25,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 22:18:25,211 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE 2023-07-12 22:18:25,211 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:25,211 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200305211"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200305211"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200305211"}]},"ts":"1689200305211"} 2023-07-12 22:18:25,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:25,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e434966f28850b76223c1f3ef1ceaf0c, disabling compactions & flushes 2023-07-12 22:18:25,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:25,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:25,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. after waiting 0 ms 2023-07-12 22:18:25,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:25,370 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:25,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:25,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:25,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e434966f28850b76223c1f3ef1ceaf0c move to jenkins-hbase4.apache.org,41907,1689200287570 record at close sequenceid=2 2023-07-12 22:18:25,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,374 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=CLOSED 2023-07-12 22:18:25,374 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200305374"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200305374"}]},"ts":"1689200305374"} 2023-07-12 22:18:25,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 22:18:25,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155 in 162 msec 2023-07-12 22:18:25,377 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:25,528 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:25,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200305528"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200305528"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200305528"}]},"ts":"1689200305528"} 2023-07-12 22:18:25,532 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:25,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:25,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e434966f28850b76223c1f3ef1ceaf0c, NAME => 'unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:25,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:25,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,704 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,705 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:25,705 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:25,706 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
e434966f28850b76223c1f3ef1ceaf0c columnFamilyName ut 2023-07-12 22:18:25,706 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(310): Store=e434966f28850b76223c1f3ef1ceaf0c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:25,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:25,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e434966f28850b76223c1f3ef1ceaf0c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10042949120, jitterRate=-0.06467747688293457}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:25,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:25,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c., pid=128, masterSystemTime=1689200305684 2023-07-12 22:18:25,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:25,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:25,716 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:25,716 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200305715"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200305715"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200305715"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200305715"}]},"ts":"1689200305715"} 2023-07-12 22:18:25,719 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 22:18:25,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 186 msec 2023-07-12 22:18:25,722 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE in 510 msec 2023-07-12 22:18:25,806 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-12 22:18:26,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 22:18:26,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
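The pid=126 sequence above (region e434966f28850b76223c1f3ef1ceaf0c closed on jenkins-hbase4.apache.org,44439 and reopened on 41907) is the master-side effect of one RSGroupAdminService.MoveTables request. As a rough client-side sketch only — the connection bootstrap and the direct use of RSGroupAdminClient are assumptions for illustration, not something taken from this test run — the same request could be issued like this:

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Client for the RSGroupAdminService coprocessor endpoint on the master.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Asks the master to place the table in the 'normal' group; on the
      // master this drives the REOPEN/MOVE TransitRegionStateProcedure
      // visible in the log entries above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}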
2023-07-12 22:18:26,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:26,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:26,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:26,219 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:26,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 22:18:26,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:26,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 22:18:26,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:26,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 22:18:26,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:26,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-12 22:18:26,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:26,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:26,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:26,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:26,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 22:18:26,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 22:18:26,243 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:26,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:26,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-12 22:18:26,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:26,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 22:18:26,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:26,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 22:18:26,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:26,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:26,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:26,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-12 22:18:26,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:26,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:26,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:26,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:26,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:26,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 22:18:26,275 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region e434966f28850b76223c1f3ef1ceaf0c to RSGroup default 2023-07-12 22:18:26,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE 2023-07-12 22:18:26,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 22:18:26,277 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE 2023-07-12 22:18:26,277 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:26,277 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200306277"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200306277"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200306277"}]},"ts":"1689200306277"} 2023-07-12 22:18:26,279 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:26,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e434966f28850b76223c1f3ef1ceaf0c, disabling compactions & flushes 2023-07-12 22:18:26,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:26,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:26,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. after waiting 0 ms 2023-07-12 22:18:26,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:26,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:26,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:26,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:26,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e434966f28850b76223c1f3ef1ceaf0c move to jenkins-hbase4.apache.org,44439,1689200283155 record at close sequenceid=5 2023-07-12 22:18:26,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,442 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=CLOSED 2023-07-12 22:18:26,442 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200306442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200306442"}]},"ts":"1689200306442"} 2023-07-12 22:18:26,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-12 22:18:26,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 164 msec 2023-07-12 22:18:26,448 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:26,599 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:26,599 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200306599"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200306599"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200306599"}]},"ts":"1689200306599"} 2023-07-12 22:18:26,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:26,749 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 22:18:26,760 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 
2023-07-12 22:18:26,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e434966f28850b76223c1f3ef1ceaf0c, NAME => 'unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:26,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:26,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,765 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,767 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:26,768 DEBUG [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/ut 2023-07-12 22:18:26,768 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e434966f28850b76223c1f3ef1ceaf0c columnFamilyName ut 2023-07-12 22:18:26,769 INFO [StoreOpener-e434966f28850b76223c1f3ef1ceaf0c-1] regionserver.HStore(310): Store=e434966f28850b76223c1f3ef1ceaf0c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:26,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:26,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e434966f28850b76223c1f3ef1ceaf0c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9876620320, jitterRate=-0.08016805350780487}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:26,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:26,776 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c., pid=131, masterSystemTime=1689200306752 2023-07-12 22:18:26,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:26,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:26,778 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e434966f28850b76223c1f3ef1ceaf0c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:26,779 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689200306778"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200306778"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200306778"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200306778"}]},"ts":"1689200306778"} 2023-07-12 22:18:26,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-12 22:18:26,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure e434966f28850b76223c1f3ef1ceaf0c, server=jenkins-hbase4.apache.org,44439,1689200283155 in 183 msec 2023-07-12 22:18:26,795 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=e434966f28850b76223c1f3ef1ceaf0c, REOPEN/MOVE in 511 msec 2023-07-12 22:18:27,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-12 22:18:27,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-12 22:18:27,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:27,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41907] to rsgroup default 2023-07-12 22:18:27,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 22:18:27,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:27,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:27,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:27,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:27,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 22:18:27,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41907,1689200287570] are moved back to normal 2023-07-12 22:18:27,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 22:18:27,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:27,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-12 22:18:27,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:27,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:27,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:27,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 22:18:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:27,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:27,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:27,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:27,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:27,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:27,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:27,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:27,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:27,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-12 22:18:27,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:27,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:27,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:27,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 22:18:27,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(345): Moving region 9603db1b4bdf68cb3fd6350c6fcf3433 to RSGroup default 2023-07-12 22:18:27,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE 2023-07-12 22:18:27,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 22:18:27,306 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE 2023-07-12 22:18:27,307 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:27,307 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200307307"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200307307"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200307307"}]},"ts":"1689200307307"} 2023-07-12 22:18:27,308 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41059,1689200282965}] 2023-07-12 22:18:27,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9603db1b4bdf68cb3fd6350c6fcf3433, disabling compactions & flushes 2023-07-12 22:18:27,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:27,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:27,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. after waiting 0 ms 2023-07-12 22:18:27,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:27,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 22:18:27,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 
2023-07-12 22:18:27,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:27,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9603db1b4bdf68cb3fd6350c6fcf3433 move to jenkins-hbase4.apache.org,41907,1689200287570 record at close sequenceid=5 2023-07-12 22:18:27,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,471 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=CLOSED 2023-07-12 22:18:27,471 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200307471"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200307471"}]},"ts":"1689200307471"} 2023-07-12 22:18:27,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-12 22:18:27,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41059,1689200282965 in 164 msec 2023-07-12 22:18:27,474 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:27,625 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:27,625 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:27,625 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200307625"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200307625"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200307625"}]},"ts":"1689200307625"} 2023-07-12 22:18:27,627 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:27,783 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 
2023-07-12 22:18:27,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9603db1b4bdf68cb3fd6350c6fcf3433, NAME => 'testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:27,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:27,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,785 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,786 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:27,786 DEBUG [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/tr 2023-07-12 22:18:27,786 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9603db1b4bdf68cb3fd6350c6fcf3433 columnFamilyName tr 2023-07-12 22:18:27,787 INFO [StoreOpener-9603db1b4bdf68cb3fd6350c6fcf3433-1] regionserver.HStore(310): Store=9603db1b4bdf68cb3fd6350c6fcf3433/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:27,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:27,793 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9603db1b4bdf68cb3fd6350c6fcf3433; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10644313440, jitterRate=-0.00867106020450592}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:27,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:27,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433., pid=134, masterSystemTime=1689200307778 2023-07-12 22:18:27,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:27,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:27,796 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=9603db1b4bdf68cb3fd6350c6fcf3433, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:27,796 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689200307795"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200307795"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200307795"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200307795"}]},"ts":"1689200307795"} 2023-07-12 22:18:27,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-12 22:18:27,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 9603db1b4bdf68cb3fd6350c6fcf3433, server=jenkins-hbase4.apache.org,41907,1689200287570 in 170 msec 2023-07-12 22:18:27,800 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=9603db1b4bdf68cb3fd6350c6fcf3433, REOPEN/MOVE in 494 msec 2023-07-12 22:18:28,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-12 22:18:28,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
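Between the rename request at 22:18:26,224 and the entry above, the log covers renaming oldgroup to newgroup and the teardown moves of unmovedTable and testRename back to default. A minimal sketch of the rename-and-verify step, assuming the branch-2.4 client exposes renameRSGroup and getRSGroupInfoOfTable as the RenameRSGroup/GetRSGroupInfoOfTable RPCs above suggest (the helper method itself is hypothetical; group and table names are reused from the log):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Rename the group and confirm the table followed it, mirroring the
// RenameRSGroup and GetRSGroupInfoOfTable requests in the log above.
static void renameAndCheck(RSGroupAdminClient rsGroupAdmin) throws java.io.IOException {
  rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
  RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
  System.out.println(info.getName()); // expected: newgroup
}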
2023-07-12 22:18:28,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:28,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:28,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 22:18:28,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:28,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 22:18:28,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to newgroup 2023-07-12 22:18:28,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 22:18:28,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:28,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-12 22:18:28,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:28,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:28,323 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:28,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:28,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:28,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:28,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:28,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201508338, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:28,339 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:28,341 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:28,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,342 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:28,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:28,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,361 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=509 (was 513), OpenFileDescriptor=776 (was 804), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 382), ProcessCount=175 (was 174) - ProcessCount LEAK? -, AvailableMemoryMB=6594 (was 4199) - AvailableMemoryMB LEAK? - 2023-07-12 22:18:28,361 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:28,378 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=509, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=175, AvailableMemoryMB=6592 2023-07-12 22:18:28,378 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-12 22:18:28,378 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-12 22:18:28,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:28,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
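
Note on the "Waiting up to [60,000] milli-secs" and "Waiting for cleanup to finish [...]" entries above: between test methods the harness polls the rsgroup layout (the repeated "list rsgroup" / ListRSGroupInfos calls) until the state it expects is restored before the next method starts. The sketch below is a minimal illustration of such a poll, not the actual TestRSGroupsBase code; the 60 s timeout and the "default" / "master" group names come from the log, while the method name waitForCleanup and the util/admin parameters are placeholders chosen here.

import java.util.List;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupWaitSketch {
  // Poll the rsgroup layout (the ListRSGroupInfos calls in the log) until only the groups
  // expected between tests remain. Timeout matches the logged 60,000 ms.
  static void waitForCleanup(HBaseTestingUtility util, RSGroupAdmin admin) throws Exception {
    util.waitFor(60_000, (Waiter.Predicate<Exception>) () -> {
      List<RSGroupInfo> groups = admin.listRSGroups();
      return groups.stream().allMatch(g ->
          RSGroupInfo.DEFAULT_GROUP.equals(g.getName()) || "master".equals(g.getName()));
    });
  }
}

The predicate here only inspects group names; the real cleanup may check server and table membership as well, which would explain why the full group dump is printed on every "Waiting for cleanup to finish" poll.
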
2023-07-12 22:18:28,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:28,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:28,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:28,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:28,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:28,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:28,392 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:28,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:28,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:28,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:28,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:28,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201508402, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:28,403 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:28,405 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:28,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,406 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:28,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:28,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 22:18:28,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:28,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 22:18:28,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 22:18:28,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-12 22:18:28,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-12 22:18:28,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:45482 deadline: 1689201508415, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 22:18:28,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-12 22:18:28,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 803 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:45482 deadline: 1689201508417, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 22:18:28,420 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 22:18:28,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-12 22:18:28,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-12 22:18:28,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:45482 deadline: 1689201508424, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 22:18:28,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:28,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
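
Two distinct ConstraintException patterns appear in this stretch of the log: the setup-time attempt to move the master's address (port 34283, the MasterRpcServices port seen throughout) into the "master" group, which fails with "is either offline or it does not exist" and is merely logged as "Got this on setup, FYI"; and the testBogusArgs calls, where operations on a group that was never created are expected to be rejected ("RSGroup bogus does not exist", "RSGroup does not exist: bogus"). The sketch below is illustrative rather than the actual TestRSGroupsBase/TestRSGroupsAdmin1 code; the calls mirror the RPCs visible above (MoveServers, RemoveRSGroup, BalanceRSGroup), and the admin and masterAddress parameters are assumed names.

import static org.junit.Assert.fail;

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

public class ConstraintExceptionSketch {

  // Setup-time pattern: moving the active master's address into the "master" group is
  // attempted and the failure is tolerated, since the master's address is not registered
  // as an online region server ("Server ...:34283 is either offline or it does not exist").
  static void tryMoveMasterToItsGroup(RSGroupAdmin admin, Address masterAddress) throws Exception {
    try {
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      System.out.println("Got this on setup, FYI: " + e.getMessage()); // matches the WARN above
    }
  }

  // testBogusArgs-style pattern: operations against a group that does not exist must be
  // rejected with ConstraintException rather than silently succeeding.
  static void expectBogusGroupRejected(RSGroupAdmin admin) throws Exception {
    try {
      admin.removeRSGroup("bogus");
      fail("expected ConstraintException: RSGroup bogus does not exist");
    } catch (ConstraintException expected) {
      // matches the RemoveRSGroup failure logged above
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      fail("expected ConstraintException: RSGroup does not exist: bogus");
    } catch (ConstraintException expected) {
      // matches the MoveServers failure logged above
    }
    try {
      admin.balanceRSGroup("bogus");
      fail("expected ConstraintException when balancing a group that does not exist");
    } catch (ConstraintException expected) {
      // matches the "balance rsgroup, group=bogus" failure logged above
    }
  }
}

Because the client side unwraps RemoteWithExtrasException back into the original ConstraintException (visible in the client stack frames above), catching ConstraintException directly on the caller's side works for both patterns.
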
2023-07-12 22:18:28,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:28,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:28,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:28,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:28,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:28,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:28,438 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:28,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:28,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:28,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:28,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:28,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201508448, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:28,451 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:28,453 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:28,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,454 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:28,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:28,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,472 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=513 (was 509) Potentially hanging thread: hconnection-0x32267dc-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x32267dc-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 776), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=175 (was 175), AvailableMemoryMB=6588 (was 6592) 2023-07-12 22:18:28,472 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:28,489 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=173, AvailableMemoryMB=6608 2023-07-12 22:18:28,489 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-12 22:18:28,489 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 22:18:28,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:28,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
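
The ResourceChecker lines above record thread count, open file descriptors, load average, process count and available memory before and after each test method, and each "Potentially hanging thread: ..." entry is a thread that is still alive after the test but was not part of the earlier snapshot (here idle hconnection shared-pool workers parked in LinkedBlockingQueue.poll). The following is a plain-JDK sketch of that before/after bookkeeping, not the actual org.apache.hadoop.hbase.ResourceChecker implementation; only the 500-thread threshold is taken from the "Thread=513 is superior to 500" warning.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class HangingThreadReportSketch {
  static final int MAX_THREADS = 500; // threshold behind "Thread=... is superior to 500"

  // Record the names of all live threads before the test runs.
  static Set<String> snapshotThreadNames() {
    Set<String> names = new HashSet<>();
    for (Thread t : Thread.getAllStackTraces().keySet()) {
      names.add(t.getName());
    }
    return names;
  }

  // After the test, warn if the total is above the threshold and print the current stack of
  // every surviving thread that was not present before, like the entries above.
  static void reportAfter(Set<String> before) {
    Map<Thread, StackTraceElement[]> now = Thread.getAllStackTraces();
    if (now.size() > MAX_THREADS) {
      System.out.println("Thread=" + now.size() + " is superior to " + MAX_THREADS);
    }
    for (Map.Entry<Thread, StackTraceElement[]> e : now.entrySet()) {
      if (!before.contains(e.getKey().getName())) {
        System.out.println("Potentially hanging thread: " + e.getKey().getName());
        for (StackTraceElement frame : e.getValue()) {
          System.out.println("    " + frame);
        }
      }
    }
  }
}
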
2023-07-12 22:18:28,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:28,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:28,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:28,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:28,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:28,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:28,503 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:28,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:28,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:28,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:28,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:28,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:28,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201508512, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:28,513 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:28,515 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,516 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:28,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:28,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:28,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 
22:18:28,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:28,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:28,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:28,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 22:18:28,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to default 2023-07-12 22:18:28,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:28,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:28,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:28,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,545 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:28,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:28,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:28,549 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:28,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-12 22:18:28,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 22:18:28,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:28,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 22:18:28,551 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:28,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:28,554 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:28,557 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,557 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,557 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,557 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,557 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:28,558 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d empty. 2023-07-12 22:18:28,558 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 empty. 2023-07-12 22:18:28,558 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c empty. 2023-07-12 22:18:28,558 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 empty. 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 empty. 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,559 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:28,559 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 22:18:28,576 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:28,578 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3257e025ab611863497f60b324253164, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:28,578 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5f9b5450444fc4244814fba2a3ab362d, NAME => 'Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:28,578 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8eef4b75514efb5cb9ffa8e905feec1c, NAME => 'Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 5f9b5450444fc4244814fba2a3ab362d, disabling compactions & flushes 2023-07-12 22:18:28,607 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. after waiting 0 ms 2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:28,607 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 
2023-07-12 22:18:28,607 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 5f9b5450444fc4244814fba2a3ab362d: 2023-07-12 22:18:28,608 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4b636aa986363fb4671dd1c67c493f11, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 3257e025ab611863497f60b324253164, disabling compactions & flushes 2023-07-12 22:18:28,608 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. after waiting 0 ms 2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:28,608 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 
2023-07-12 22:18:28,608 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 3257e025ab611863497f60b324253164: 2023-07-12 22:18:28,609 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 29bbd03e9a5f3334883a2b7fdcacae34, NAME => 'Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp 2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8eef4b75514efb5cb9ffa8e905feec1c, disabling compactions & flushes 2023-07-12 22:18:28,610 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. after waiting 0 ms 2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,610 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 
2023-07-12 22:18:28,610 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8eef4b75514efb5cb9ffa8e905feec1c: 2023-07-12 22:18:28,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 4b636aa986363fb4671dd1c67c493f11, disabling compactions & flushes 2023-07-12 22:18:28,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 29bbd03e9a5f3334883a2b7fdcacae34, disabling compactions & flushes 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. after waiting 0 ms 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. after waiting 0 ms 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 
2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 4b636aa986363fb4671dd1c67c493f11: 2023-07-12 22:18:28,624 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 29bbd03e9a5f3334883a2b7fdcacae34: 2023-07-12 22:18:28,626 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:28,627 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200308627"}]},"ts":"1689200308627"} 2023-07-12 22:18:28,627 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200308627"}]},"ts":"1689200308627"} 2023-07-12 22:18:28,627 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200308627"}]},"ts":"1689200308627"} 2023-07-12 22:18:28,627 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200308627"}]},"ts":"1689200308627"} 2023-07-12 22:18:28,628 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200308627"}]},"ts":"1689200308627"} 2023-07-12 22:18:28,630 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 22:18:28,630 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:28,631 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200308631"}]},"ts":"1689200308631"} 2023-07-12 22:18:28,632 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 22:18:28,636 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:28,637 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:28,637 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:28,637 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:28,637 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, ASSIGN}] 2023-07-12 22:18:28,639 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, ASSIGN 2023-07-12 22:18:28,639 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, ASSIGN 2023-07-12 22:18:28,640 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, ASSIGN 2023-07-12 22:18:28,640 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, ASSIGN 2023-07-12 22:18:28,640 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:28,641 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, ASSIGN 2023-07-12 22:18:28,640 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:28,641 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:28,641 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41907,1689200287570; forceNewPlan=false, retain=false 2023-07-12 22:18:28,642 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44439,1689200283155; forceNewPlan=false, retain=false 2023-07-12 22:18:28,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 22:18:28,791 INFO [jenkins-hbase4:34283] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 22:18:28,794 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=4b636aa986363fb4671dd1c67c493f11, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:28,794 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=8eef4b75514efb5cb9ffa8e905feec1c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:28,794 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=29bbd03e9a5f3334883a2b7fdcacae34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:28,794 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=3257e025ab611863497f60b324253164, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:28,794 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=5f9b5450444fc4244814fba2a3ab362d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:28,795 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308794"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200308794"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200308794"}]},"ts":"1689200308794"} 2023-07-12 22:18:28,795 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308794"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200308794"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200308794"}]},"ts":"1689200308794"} 2023-07-12 22:18:28,795 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308794"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200308794"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200308794"}]},"ts":"1689200308794"} 2023-07-12 22:18:28,795 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308794"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200308794"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200308794"}]},"ts":"1689200308794"} 2023-07-12 22:18:28,795 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308794"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200308794"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200308794"}]},"ts":"1689200308794"} 2023-07-12 22:18:28,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE; OpenRegionProcedure 29bbd03e9a5f3334883a2b7fdcacae34, 
server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:28,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure 5f9b5450444fc4244814fba2a3ab362d, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:28,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=136, state=RUNNABLE; OpenRegionProcedure 8eef4b75514efb5cb9ffa8e905feec1c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:28,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure 4b636aa986363fb4671dd1c67c493f11, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:28,803 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=138, state=RUNNABLE; OpenRegionProcedure 3257e025ab611863497f60b324253164, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:28,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 22:18:28,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 29bbd03e9a5f3334883a2b7fdcacae34, NAME => 'Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 22:18:28,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,954 INFO [StoreOpener-29bbd03e9a5f3334883a2b7fdcacae34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,956 DEBUG [StoreOpener-29bbd03e9a5f3334883a2b7fdcacae34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/f 2023-07-12 22:18:28,956 DEBUG [StoreOpener-29bbd03e9a5f3334883a2b7fdcacae34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/f 
2023-07-12 22:18:28,956 INFO [StoreOpener-29bbd03e9a5f3334883a2b7fdcacae34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 29bbd03e9a5f3334883a2b7fdcacae34 columnFamilyName f 2023-07-12 22:18:28,957 INFO [StoreOpener-29bbd03e9a5f3334883a2b7fdcacae34-1] regionserver.HStore(310): Store=29bbd03e9a5f3334883a2b7fdcacae34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:28,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b636aa986363fb4671dd1c67c493f11, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,959 INFO [StoreOpener-4b636aa986363fb4671dd1c67c493f11-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,961 DEBUG [StoreOpener-4b636aa986363fb4671dd1c67c493f11-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/f 2023-07-12 22:18:28,961 DEBUG [StoreOpener-4b636aa986363fb4671dd1c67c493f11-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/f 2023-07-12 22:18:28,962 INFO [StoreOpener-4b636aa986363fb4671dd1c67c493f11-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b636aa986363fb4671dd1c67c493f11 columnFamilyName f 2023-07-12 22:18:28,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:28,963 INFO [StoreOpener-4b636aa986363fb4671dd1c67c493f11-1] regionserver.HStore(310): Store=4b636aa986363fb4671dd1c67c493f11/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:28,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:28,967 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 29bbd03e9a5f3334883a2b7fdcacae34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9479720160, jitterRate=-0.1171322613954544}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:28,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 29bbd03e9a5f3334883a2b7fdcacae34: 2023-07-12 22:18:28,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34., pid=141, masterSystemTime=1689200308948 2023-07-12 22:18:28,969 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:28,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:28,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8eef4b75514efb5cb9ffa8e905feec1c, NAME => 'Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 22:18:28,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,971 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=29bbd03e9a5f3334883a2b7fdcacae34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:28,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,971 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308971"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200308971"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200308971"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200308971"}]},"ts":"1689200308971"} 2023-07-12 22:18:28,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-12 22:18:28,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; OpenRegionProcedure 29bbd03e9a5f3334883a2b7fdcacae34, server=jenkins-hbase4.apache.org,41907,1689200287570 in 177 msec 2023-07-12 22:18:28,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:28,977 INFO [StoreOpener-8eef4b75514efb5cb9ffa8e905feec1c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, ASSIGN in 339 msec 2023-07-12 22:18:28,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b636aa986363fb4671dd1c67c493f11; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11463040640, jitterRate=0.06757885217666626}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:28,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b636aa986363fb4671dd1c67c493f11: 2023-07-12 22:18:28,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11., pid=144, masterSystemTime=1689200308954 2023-07-12 22:18:28,979 DEBUG [StoreOpener-8eef4b75514efb5cb9ffa8e905feec1c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/f 2023-07-12 22:18:28,979 DEBUG [StoreOpener-8eef4b75514efb5cb9ffa8e905feec1c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/f 2023-07-12 22:18:28,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,980 INFO [StoreOpener-8eef4b75514efb5cb9ffa8e905feec1c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8eef4b75514efb5cb9ffa8e905feec1c columnFamilyName f 2023-07-12 22:18:28,980 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:28,980 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 
2023-07-12 22:18:28,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3257e025ab611863497f60b324253164, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 22:18:28,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3257e025ab611863497f60b324253164 2023-07-12 22:18:28,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3257e025ab611863497f60b324253164 2023-07-12 22:18:28,981 INFO [StoreOpener-8eef4b75514efb5cb9ffa8e905feec1c-1] regionserver.HStore(310): Store=8eef4b75514efb5cb9ffa8e905feec1c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:28,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3257e025ab611863497f60b324253164 2023-07-12 22:18:28,981 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=4b636aa986363fb4671dd1c67c493f11, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:28,982 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308981"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200308981"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200308981"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200308981"}]},"ts":"1689200308981"} 2023-07-12 22:18:28,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,983 INFO [StoreOpener-3257e025ab611863497f60b324253164-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3257e025ab611863497f60b324253164 2023-07-12 22:18:28,985 DEBUG [StoreOpener-3257e025ab611863497f60b324253164-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/f 
2023-07-12 22:18:28,985 DEBUG [StoreOpener-3257e025ab611863497f60b324253164-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/f 2023-07-12 22:18:28,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-12 22:18:28,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure 4b636aa986363fb4671dd1c67c493f11, server=jenkins-hbase4.apache.org,44439,1689200283155 in 184 msec 2023-07-12 22:18:28,986 INFO [StoreOpener-3257e025ab611863497f60b324253164-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3257e025ab611863497f60b324253164 columnFamilyName f 2023-07-12 22:18:28,986 INFO [StoreOpener-3257e025ab611863497f60b324253164-1] regionserver.HStore(310): Store=3257e025ab611863497f60b324253164/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:28,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:28,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, ASSIGN in 348 msec 2023-07-12 22:18:28,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:28,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:28,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:28,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8eef4b75514efb5cb9ffa8e905feec1c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11861338080, jitterRate=0.10467319190502167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:28,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for 8eef4b75514efb5cb9ffa8e905feec1c: 2023-07-12 22:18:28,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c., pid=143, masterSystemTime=1689200308948 2023-07-12 22:18:28,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3257e025ab611863497f60b324253164 2023-07-12 22:18:28,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:28,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:28,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5f9b5450444fc4244814fba2a3ab362d, NAME => 'Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 22:18:28,993 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=8eef4b75514efb5cb9ffa8e905feec1c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:28,993 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200308993"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200308993"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200308993"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200308993"}]},"ts":"1689200308993"} 2023-07-12 22:18:28,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:28,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,995 INFO [StoreOpener-5f9b5450444fc4244814fba2a3ab362d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:28,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:28,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3257e025ab611863497f60b324253164; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11418635520, jitterRate=0.06344330310821533}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:28,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3257e025ab611863497f60b324253164: 2023-07-12 22:18:28,997 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164., pid=145, masterSystemTime=1689200308954 2023-07-12 22:18:28,997 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=136 2023-07-12 22:18:28,997 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=136, state=SUCCESS; OpenRegionProcedure 8eef4b75514efb5cb9ffa8e905feec1c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 197 msec 2023-07-12 22:18:28,998 DEBUG [StoreOpener-5f9b5450444fc4244814fba2a3ab362d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/f 2023-07-12 22:18:28,998 DEBUG [StoreOpener-5f9b5450444fc4244814fba2a3ab362d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/f 2023-07-12 22:18:28,999 INFO [StoreOpener-5f9b5450444fc4244814fba2a3ab362d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5f9b5450444fc4244814fba2a3ab362d columnFamilyName f 2023-07-12 22:18:28,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:28,999 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 
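The region-open entries above (and the CreateTableProcedure pid=135 that finishes a few entries below) correspond to a table pre-split into five regions with a single column family "f"; the STARTKEY/ENDKEY values logged by HRegion(7854) are the split points. A minimal sketch of the kind of client-side call that produces this layout is shown here; it assumes an already-open Connection named conn, and the class/variable names are illustrative assumptions, not copied from the test source.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class CreatePreSplitTableSketch {
  // Creates Group_testDisabledTableMove pre-split into five regions whose boundaries
  // match the STARTKEY/ENDKEY values in the log ('', 'aaaaa', 'i\xBF\x14i\xBE',
  // 'r\x1C\xC7r\x1B', 'zzzzz').
  static void createTable(Connection conn) throws IOException {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
    };
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))   // column family "f", defaults otherwise
              .build(),
          splitKeys);
    }
  }
}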
2023-07-12 22:18:28,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, ASSIGN in 360 msec 2023-07-12 22:18:28,999 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=3257e025ab611863497f60b324253164, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:28,999 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200308999"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200308999"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200308999"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200308999"}]},"ts":"1689200308999"} 2023-07-12 22:18:28,999 INFO [StoreOpener-5f9b5450444fc4244814fba2a3ab362d-1] regionserver.HStore(310): Store=5f9b5450444fc4244814fba2a3ab362d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:29,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=138 2023-07-12 22:18:29,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=138, state=SUCCESS; OpenRegionProcedure 3257e025ab611863497f60b324253164, server=jenkins-hbase4.apache.org,44439,1689200283155 in 201 msec 2023-07-12 22:18:29,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, ASSIGN in 366 msec 2023-07-12 22:18:29,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:29,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5f9b5450444fc4244814fba2a3ab362d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9821264640, jitterRate=-0.08532345294952393}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:29,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
5f9b5450444fc4244814fba2a3ab362d: 2023-07-12 22:18:29,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d., pid=142, masterSystemTime=1689200308948 2023-07-12 22:18:29,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:29,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:29,010 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=5f9b5450444fc4244814fba2a3ab362d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,010 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309010"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200309010"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200309010"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200309010"}]},"ts":"1689200309010"} 2023-07-12 22:18:29,012 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-12 22:18:29,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; OpenRegionProcedure 5f9b5450444fc4244814fba2a3ab362d, server=jenkins-hbase4.apache.org,41907,1689200287570 in 214 msec 2023-07-12 22:18:29,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=135 2023-07-12 22:18:29,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, ASSIGN in 376 msec 2023-07-12 22:18:29,015 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:29,015 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200309015"}]},"ts":"1689200309015"} 2023-07-12 22:18:29,016 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 22:18:29,018 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:29,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 471 msec 2023-07-12 22:18:29,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 22:18:29,153 INFO [Listener at localhost/40739] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-12 22:18:29,154 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 22:18:29,154 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:29,157 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-12 22:18:29,158 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:29,158 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 22:18:29,158 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:29,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 22:18:29,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:29,165 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 22:18:29,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 22:18:29,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 22:18:29,170 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200309170"}]},"ts":"1689200309170"} 2023-07-12 22:18:29,171 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 22:18:29,173 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 22:18:29,174 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, 
region=4b636aa986363fb4671dd1c67c493f11, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, UNASSIGN}] 2023-07-12 22:18:29,176 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, UNASSIGN 2023-07-12 22:18:29,176 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, UNASSIGN 2023-07-12 22:18:29,176 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, UNASSIGN 2023-07-12 22:18:29,176 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, UNASSIGN 2023-07-12 22:18:29,176 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, UNASSIGN 2023-07-12 22:18:29,176 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=3257e025ab611863497f60b324253164, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:29,177 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=5f9b5450444fc4244814fba2a3ab362d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,177 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309176"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200309176"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200309176"}]},"ts":"1689200309176"} 2023-07-12 22:18:29,177 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309177"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200309177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200309177"}]},"ts":"1689200309177"} 2023-07-12 22:18:29,177 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=8eef4b75514efb5cb9ffa8e905feec1c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,177 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=4b636aa986363fb4671dd1c67c493f11, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:29,177 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200309177"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200309177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200309177"}]},"ts":"1689200309177"} 2023-07-12 22:18:29,177 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309177"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200309177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200309177"}]},"ts":"1689200309177"} 2023-07-12 22:18:29,177 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=29bbd03e9a5f3334883a2b7fdcacae34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,178 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200309177"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200309177"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200309177"}]},"ts":"1689200309177"} 2023-07-12 22:18:29,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=149, state=RUNNABLE; CloseRegionProcedure 3257e025ab611863497f60b324253164, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:29,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 5f9b5450444fc4244814fba2a3ab362d, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:29,180 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=147, state=RUNNABLE; CloseRegionProcedure 8eef4b75514efb5cb9ffa8e905feec1c, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:29,181 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=150, state=RUNNABLE; CloseRegionProcedure 4b636aa986363fb4671dd1c67c493f11, server=jenkins-hbase4.apache.org,44439,1689200283155}] 2023-07-12 22:18:29,182 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=151, state=RUNNABLE; CloseRegionProcedure 29bbd03e9a5f3334883a2b7fdcacae34, server=jenkins-hbase4.apache.org,41907,1689200287570}] 2023-07-12 22:18:29,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 22:18:29,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:29,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:29,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b636aa986363fb4671dd1c67c493f11, disabling compactions & flushes 2023-07-12 22:18:29,333 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 29bbd03e9a5f3334883a2b7fdcacae34, disabling 
compactions & flushes 2023-07-12 22:18:29,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:29,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. after waiting 0 ms 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. after waiting 0 ms 2023-07-12 22:18:29,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 2023-07-12 22:18:29,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:29,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:29,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11. 2023-07-12 22:18:29,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b636aa986363fb4671dd1c67c493f11: 2023-07-12 22:18:29,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34. 
2023-07-12 22:18:29,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 29bbd03e9a5f3334883a2b7fdcacae34: 2023-07-12 22:18:29,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:29,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:29,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8eef4b75514efb5cb9ffa8e905feec1c, disabling compactions & flushes 2023-07-12 22:18:29,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:29,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:29,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. after waiting 0 ms 2023-07-12 22:18:29,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:29,342 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=29bbd03e9a5f3334883a2b7fdcacae34, regionState=CLOSED 2023-07-12 22:18:29,342 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200309342"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200309342"}]},"ts":"1689200309342"} 2023-07-12 22:18:29,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:29,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3257e025ab611863497f60b324253164 2023-07-12 22:18:29,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3257e025ab611863497f60b324253164, disabling compactions & flushes 2023-07-12 22:18:29,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:29,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:29,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. after waiting 0 ms 2023-07-12 22:18:29,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 
2023-07-12 22:18:29,344 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=4b636aa986363fb4671dd1c67c493f11, regionState=CLOSED 2023-07-12 22:18:29,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309343"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200309343"}]},"ts":"1689200309343"} 2023-07-12 22:18:29,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:29,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=151 2023-07-12 22:18:29,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=151, state=SUCCESS; CloseRegionProcedure 29bbd03e9a5f3334883a2b7fdcacae34, server=jenkins-hbase4.apache.org,41907,1689200287570 in 162 msec 2023-07-12 22:18:29,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:29,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c. 2023-07-12 22:18:29,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8eef4b75514efb5cb9ffa8e905feec1c: 2023-07-12 22:18:29,348 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=150 2023-07-12 22:18:29,348 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=150, state=SUCCESS; CloseRegionProcedure 4b636aa986363fb4671dd1c67c493f11, server=jenkins-hbase4.apache.org,44439,1689200283155 in 164 msec 2023-07-12 22:18:29,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164. 2023-07-12 22:18:29,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3257e025ab611863497f60b324253164: 2023-07-12 22:18:29,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:29,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5f9b5450444fc4244814fba2a3ab362d, disabling compactions & flushes 2023-07-12 22:18:29,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 
2023-07-12 22:18:29,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:29,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. after waiting 0 ms 2023-07-12 22:18:29,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 2023-07-12 22:18:29,352 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=29bbd03e9a5f3334883a2b7fdcacae34, UNASSIGN in 173 msec 2023-07-12 22:18:29,352 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4b636aa986363fb4671dd1c67c493f11, UNASSIGN in 174 msec 2023-07-12 22:18:29,352 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3257e025ab611863497f60b324253164 2023-07-12 22:18:29,352 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=8eef4b75514efb5cb9ffa8e905feec1c, regionState=CLOSED 2023-07-12 22:18:29,352 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689200309352"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200309352"}]},"ts":"1689200309352"} 2023-07-12 22:18:29,353 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=3257e025ab611863497f60b324253164, regionState=CLOSED 2023-07-12 22:18:29,353 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309353"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200309353"}]},"ts":"1689200309353"} 2023-07-12 22:18:29,356 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=147 2023-07-12 22:18:29,356 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=147, state=SUCCESS; CloseRegionProcedure 8eef4b75514efb5cb9ffa8e905feec1c, server=jenkins-hbase4.apache.org,41907,1689200287570 in 174 msec 2023-07-12 22:18:29,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:29,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d. 
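The close/unassign entries above belong to DisableTableProcedure pid=146; the entries that follow show the now-disabled table being moved to rsgroup Group_testDisabledTableMove_2132534156 (with "Skipping move regions because the table ... is disabled", so 0 regions move) and then deleted. A rough sketch of the client-side sequence driving those three steps is shown below, assuming an open Connection named conn and the branch-2.4 hbase-rsgroup client; the RSGroupAdminClient usage and variable names are my assumptions about that API rather than code taken from TestRSGroupsAdmin1.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class DisabledTableMoveSketch {
  static void disableMoveDelete(Connection conn) throws IOException {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    // The target rsgroup is assumed to exist already (it was added earlier in the test run).
    String targetGroup = "Group_testDisabledTableMove_2132534156";
    try (Admin admin = conn.getAdmin()) {
      // DisableTableProcedure (pid=146 in the log): regions are closed and the table
      // is marked DISABLED in hbase:meta.
      admin.disableTable(table);

      // Move the disabled table to the target rsgroup; since the table is disabled,
      // the master logs that it skips moving regions and moves 0 of them.
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);

      // DeleteTableProcedure (pid=158 in the log): region directories are archived
      // by HFileArchiver and the table is removed from hbase:meta.
      admin.deleteTable(table);
    }
  }
}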
2023-07-12 22:18:29,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5f9b5450444fc4244814fba2a3ab362d: 2023-07-12 22:18:29,358 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=149 2023-07-12 22:18:29,358 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=149, state=SUCCESS; CloseRegionProcedure 3257e025ab611863497f60b324253164, server=jenkins-hbase4.apache.org,44439,1689200283155 in 177 msec 2023-07-12 22:18:29,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,359 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8eef4b75514efb5cb9ffa8e905feec1c, UNASSIGN in 182 msec 2023-07-12 22:18:29,359 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=5f9b5450444fc4244814fba2a3ab362d, regionState=CLOSED 2023-07-12 22:18:29,359 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689200309359"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200309359"}]},"ts":"1689200309359"} 2023-07-12 22:18:29,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3257e025ab611863497f60b324253164, UNASSIGN in 184 msec 2023-07-12 22:18:29,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-12 22:18:29,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 5f9b5450444fc4244814fba2a3ab362d, server=jenkins-hbase4.apache.org,41907,1689200287570 in 181 msec 2023-07-12 22:18:29,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=146 2023-07-12 22:18:29,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5f9b5450444fc4244814fba2a3ab362d, UNASSIGN in 187 msec 2023-07-12 22:18:29,367 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200309367"}]},"ts":"1689200309367"} 2023-07-12 22:18:29,368 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 22:18:29,370 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 22:18:29,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 205 msec 2023-07-12 22:18:29,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 22:18:29,472 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-12 
22:18:29,472 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:29,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:29,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 22:18:29,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2132534156, current retry=0 2023-07-12 22:18:29,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_2132534156. 2023-07-12 22:18:29,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:29,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 22:18:29,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:29,485 INFO [Listener at localhost/40739] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 22:18:29,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 22:18:29,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at 
org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.&lt;init&gt;(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:29,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:45482 deadline: 1689200369486, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 22:18:29,487 DEBUG [Listener at localhost/40739] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-12 22:18:29,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-12 22:18:29,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,490 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_2132534156' 2023-07-12 22:18:29,491 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:29,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:29,499 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:29,499 DEBUG
[HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:29,499 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,499 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:29,499 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:29,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 22:18:29,502 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/recovered.edits] 2023-07-12 22:18:29,502 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/recovered.edits] 2023-07-12 22:18:29,502 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/recovered.edits] 2023-07-12 22:18:29,503 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/recovered.edits] 2023-07-12 22:18:29,503 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/f, FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/recovered.edits] 2023-07-12 22:18:29,512 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c/recovered.edits/4.seqid 2023-07-12 22:18:29,513 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11/recovered.edits/4.seqid 2023-07-12 22:18:29,513 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d/recovered.edits/4.seqid 2023-07-12 22:18:29,513 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164/recovered.edits/4.seqid 2023-07-12 22:18:29,514 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/recovered.edits/4.seqid to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/archive/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34/recovered.edits/4.seqid 2023-07-12 22:18:29,514 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/8eef4b75514efb5cb9ffa8e905feec1c 2023-07-12 22:18:29,514 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/4b636aa986363fb4671dd1c67c493f11 2023-07-12 22:18:29,515 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/5f9b5450444fc4244814fba2a3ab362d 2023-07-12 22:18:29,515 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/3257e025ab611863497f60b324253164 2023-07-12 22:18:29,515 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/.tmp/data/default/Group_testDisabledTableMove/29bbd03e9a5f3334883a2b7fdcacae34 2023-07-12 22:18:29,515 
DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 22:18:29,518 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,521 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 22:18:29,526 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-12 22:18:29,527 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,527 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-12 22:18:29,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200309527"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200309527"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689200308546.3257e025ab611863497f60b324253164.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200309527"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200309527"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,528 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200309527"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,529 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 22:18:29,530 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8eef4b75514efb5cb9ffa8e905feec1c, NAME => 'Group_testDisabledTableMove,,1689200308546.8eef4b75514efb5cb9ffa8e905feec1c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5f9b5450444fc4244814fba2a3ab362d, NAME => 'Group_testDisabledTableMove,aaaaa,1689200308546.5f9b5450444fc4244814fba2a3ab362d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 3257e025ab611863497f60b324253164, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689200308546.3257e025ab611863497f60b324253164.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4b636aa986363fb4671dd1c67c493f11, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689200308546.4b636aa986363fb4671dd1c67c493f11.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 29bbd03e9a5f3334883a2b7fdcacae34, NAME => 
'Group_testDisabledTableMove,zzzzz,1689200308546.29bbd03e9a5f3334883a2b7fdcacae34.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 22:18:29,530 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 2023-07-12 22:18:29,530 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200309530"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:29,531 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 22:18:29,534 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 22:18:29,535 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 47 msec 2023-07-12 22:18:29,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 22:18:29,602 INFO [Listener at localhost/40739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-12 22:18:29,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:29,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
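
The disable/delete sequence above is the standard client-side teardown path: the disable call is rejected with TableNotEnabledException because the table is already disabled, so the test utility falls straight through to the delete, and DeleteTableProcedure (pid=158) archives the region directories, removes the five region rows from hbase:meta, and drops the table descriptor. A minimal sketch of driving the same disable-then-delete flow through the public Admin API is shown below; the connection setup and the wrapper class are illustrative, only the table name is taken from the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // deleteTable requires a disabled table; only disable if still enabled,
      // otherwise the master raises TableNotEnabledException as in the log.
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      // Triggers DeleteTableProcedure: archive region dirs, clean hbase:meta,
      // remove the table descriptor and assignment state.
      admin.deleteTable(table);
    }
  }
}
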
2023-07-12 22:18:29,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:29,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-12 22:18:29,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:29,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:29,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2132534156, current retry=0 2023-07-12 22:18:29,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1689200282765, jenkins-hbase4.apache.org,41059,1689200282965] are moved back to Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_2132534156 => default 2023-07-12 22:18:29,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:29,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_2132534156 2023-07-12 22:18:29,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:29,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:29,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:29,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:29,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
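
The teardown then walks the rsgroup state back: the two servers parked in Group_testDisabledTableMove_2132534156 are moved to the default group, the znodes under /hbase/rsgroup are rewritten, and the now-empty group is removed. A rough sketch of issuing the same calls from a client through the hbase-rsgroup module's RSGroupAdminClient (the class visible in the stack traces above) follows; the connection setup and the host:port pair are illustrative, and the constructor and method signatures are stated from memory of the branch-2 API rather than taken from this log.

import java.util.Collections;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
        ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Hypothetical client setup; the test builds an equivalent client internally.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Move the group's servers back to 'default' (host:port is illustrative).
      Set<Address> servers =
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41059));
      rsGroupAdmin.moveServers(servers, "default");

      // With no servers or tables left, the group can be dropped, which is
      // what rewrites the /hbase/rsgroup znodes seen in the log.
      rsGroupAdmin.removeRSGroup("Group_testDisabledTableMove_2132534156");
    }
  }
}
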
2023-07-12 22:18:29,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:29,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:29,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:29,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:29,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:29,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:29,629 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:29,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:29,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:29,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:29,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:29,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:29,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:29,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201509640, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:29,641 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:29,643 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:29,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,644 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:29,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:29,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:29,664 INFO [Listener at localhost/40739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515 (was 513) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x724df952-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b822857-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322032218_17 at /127.0.0.1:33012 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=799 (was 776) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=376 (was 376), ProcessCount=175 (was 173) - ProcessCount LEAK? -, AvailableMemoryMB=6677 (was 6608) - AvailableMemoryMB LEAK? 
- 2023-07-12 22:18:29,664 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 22:18:29,680 INFO [Listener at localhost/40739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=515, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=376, ProcessCount=175, AvailableMemoryMB=6693 2023-07-12 22:18:29,680 WARN [Listener at localhost/40739] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-12 22:18:29,680 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 22:18:29,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:29,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:29,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:29,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:29,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:29,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:29,693 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:29,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:29,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:29,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 22:18:29,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:29,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:29,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34283] to rsgroup master 2023-07-12 22:18:29,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:29,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45482 deadline: 1689201509706, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 2023-07-12 22:18:29,707 WARN [Listener at localhost/40739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:29,708 INFO [Listener at localhost/40739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:29,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:29,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:29,709 INFO [Listener at localhost/40739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41059, jenkins-hbase4.apache.org:41907, jenkins-hbase4.apache.org:44439], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:29,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:29,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:29,710 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 22:18:29,710 INFO [Listener at localhost/40739] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 22:18:29,711 DEBUG [Listener at localhost/40739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3041a83e to 127.0.0.1:59420 2023-07-12 22:18:29,711 DEBUG [Listener at localhost/40739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,712 DEBUG [Listener at localhost/40739] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 22:18:29,712 DEBUG [Listener at localhost/40739] util.JVMClusterUtil(257): Found active master hash=1794907313, stopped=false 2023-07-12 22:18:29,713 DEBUG [Listener at localhost/40739] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:29,713 DEBUG [Listener at localhost/40739] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:29,713 INFO [Listener at localhost/40739] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:29,715 INFO [Listener at localhost/40739] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:29,715 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:29,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:29,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:29,716 DEBUG [Listener at localhost/40739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7036f813 to 127.0.0.1:59420 2023-07-12 22:18:29,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:29,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:29,716 DEBUG [Listener at localhost/40739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:29,717 INFO [Listener at localhost/40739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37441,1689200282765' ***** 2023-07-12 22:18:29,717 INFO [Listener at localhost/40739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:29,717 INFO [Listener at localhost/40739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41059,1689200282965' ***** 2023-07-12 22:18:29,717 INFO [Listener at localhost/40739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:29,717 INFO [Listener at localhost/40739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44439,1689200283155' ***** 2023-07-12 22:18:29,717 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:29,717 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:29,719 INFO [Listener at localhost/40739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:29,719 INFO [Listener at localhost/40739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41907,1689200287570' ***** 2023-07-12 22:18:29,719 INFO 
[RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:29,719 INFO [Listener at localhost/40739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:29,720 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:29,735 INFO [RS:2;jenkins-hbase4:44439] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@41710862{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:29,735 INFO [RS:3;jenkins-hbase4:41907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@57235bf5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:29,735 INFO [RS:1;jenkins-hbase4:41059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@65f8279a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:29,735 INFO [RS:0;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3985421f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:29,739 INFO [RS:1;jenkins-hbase4:41059] server.AbstractConnector(383): Stopped ServerConnector@5e7280c7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:29,739 INFO [RS:0;jenkins-hbase4:37441] server.AbstractConnector(383): Stopped ServerConnector@5c7f14cd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:29,739 INFO [RS:1;jenkins-hbase4:41059] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:29,739 INFO [RS:3;jenkins-hbase4:41907] server.AbstractConnector(383): Stopped ServerConnector@4f2d7206{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:29,739 INFO [RS:0;jenkins-hbase4:37441] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:29,740 INFO [RS:2;jenkins-hbase4:44439] server.AbstractConnector(383): Stopped ServerConnector@43e2a6e5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:29,740 INFO [RS:3;jenkins-hbase4:41907] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:29,741 INFO [RS:0;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d5f0290{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:29,740 INFO [RS:1;jenkins-hbase4:41059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e5ca88b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:29,742 INFO [RS:0;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@36760574{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:29,742 INFO [RS:3;jenkins-hbase4:41907] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@84f5bbc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:29,743 INFO [RS:1;jenkins-hbase4:41059] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4124822{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:29,744 INFO [RS:3;jenkins-hbase4:41907] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@700b9517{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:29,741 INFO [RS:2;jenkins-hbase4:44439] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:29,744 INFO [RS:2;jenkins-hbase4:44439] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53ca0225{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:29,745 INFO [RS:2;jenkins-hbase4:44439] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@618603ab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:29,747 INFO [RS:2;jenkins-hbase4:44439] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:29,747 INFO [RS:3;jenkins-hbase4:41907] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:29,747 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:29,747 INFO [RS:0;jenkins-hbase4:37441] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:29,747 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:29,747 INFO [RS:0;jenkins-hbase4:37441] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:29,747 INFO [RS:2;jenkins-hbase4:44439] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:29,748 INFO [RS:0;jenkins-hbase4:37441] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:29,748 INFO [RS:2;jenkins-hbase4:44439] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:29,747 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:29,748 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:29,748 INFO [RS:3;jenkins-hbase4:41907] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:29,748 INFO [RS:1;jenkins-hbase4:41059] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:29,748 INFO [RS:3;jenkins-hbase4:41907] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 22:18:29,748 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(3305): Received CLOSE for 014ce980d8eb773efb72cff5eb62d9a2 2023-07-12 22:18:29,748 DEBUG [RS:0;jenkins-hbase4:37441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c7f7420 to 127.0.0.1:59420 2023-07-12 22:18:29,748 INFO [RS:1;jenkins-hbase4:41059] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:29,748 INFO [RS:1;jenkins-hbase4:41059] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:29,748 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(3305): Received CLOSE for 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:29,748 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:29,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 014ce980d8eb773efb72cff5eb62d9a2, disabling compactions & flushes 2023-07-12 22:18:29,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:29,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:29,750 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(3305): Received CLOSE for 5dadbef7ea97919927df58525570971d 2023-07-12 22:18:29,749 DEBUG [RS:0;jenkins-hbase4:37441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,750 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37441,1689200282765; all regions closed. 2023-07-12 22:18:29,749 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:29,750 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(3305): Received CLOSE for e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:29,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. after waiting 0 ms 2023-07-12 22:18:29,749 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,750 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9603db1b4bdf68cb3fd6350c6fcf3433, disabling compactions & flushes 2023-07-12 22:18:29,751 DEBUG [RS:3;jenkins-hbase4:41907] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ffe55ff to 127.0.0.1:59420 2023-07-12 22:18:29,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 
2023-07-12 22:18:29,751 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:29,751 DEBUG [RS:2;jenkins-hbase4:44439] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43e189ab to 127.0.0.1:59420 2023-07-12 22:18:29,751 DEBUG [RS:1;jenkins-hbase4:41059] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x13a61d18 to 127.0.0.1:59420 2023-07-12 22:18:29,751 DEBUG [RS:3;jenkins-hbase4:41907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,751 DEBUG [RS:2;jenkins-hbase4:44439] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 014ce980d8eb773efb72cff5eb62d9a2 1/1 column families, dataSize=27.06 KB heapSize=44.65 KB 2023-07-12 22:18:29,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:29,751 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:29,751 DEBUG [RS:1;jenkins-hbase4:41059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,751 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1478): Online Regions={9603db1b4bdf68cb3fd6350c6fcf3433=testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433.} 2023-07-12 22:18:29,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:29,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. after waiting 0 ms 2023-07-12 22:18:29,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:29,752 INFO [RS:2;jenkins-hbase4:44439] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:29,752 INFO [RS:2;jenkins-hbase4:44439] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:29,752 INFO [RS:2;jenkins-hbase4:44439] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:29,752 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 22:18:29,751 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41059,1689200282965; all regions closed. 
2023-07-12 22:18:29,752 DEBUG [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1504): Waiting on 9603db1b4bdf68cb3fd6350c6fcf3433 2023-07-12 22:18:29,758 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 22:18:29,759 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1478): Online Regions={014ce980d8eb773efb72cff5eb62d9a2=hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2., 5dadbef7ea97919927df58525570971d=hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d., 1588230740=hbase:meta,,1.1588230740, e434966f28850b76223c1f3ef1ceaf0c=unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c.} 2023-07-12 22:18:29,759 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1504): Waiting on 014ce980d8eb773efb72cff5eb62d9a2, 1588230740, 5dadbef7ea97919927df58525570971d, e434966f28850b76223c1f3ef1ceaf0c 2023-07-12 22:18:29,759 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:29,759 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:29,759 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:29,759 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:29,759 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:29,759 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=78.12 KB heapSize=122.99 KB 2023-07-12 22:18:29,762 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,762 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,771 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,771 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/testRename/9603db1b4bdf68cb3fd6350c6fcf3433/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 22:18:29,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 2023-07-12 22:18:29,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9603db1b4bdf68cb3fd6350c6fcf3433: 2023-07-12 22:18:29,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689200302933.9603db1b4bdf68cb3fd6350c6fcf3433. 
2023-07-12 22:18:29,791 DEBUG [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:29,791 DEBUG [RS:1;jenkins-hbase4:41059] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:29,791 INFO [RS:1;jenkins-hbase4:41059] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41059%2C1689200282965:(num 1689200285593) 2023-07-12 22:18:29,791 INFO [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37441%2C1689200282765.meta:.meta(num 1689200285918) 2023-07-12 22:18:29,791 DEBUG [RS:1;jenkins-hbase4:41059] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,792 INFO [RS:1;jenkins-hbase4:41059] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,799 INFO [RS:1;jenkins-hbase4:41059] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:29,799 INFO [RS:1;jenkins-hbase4:41059] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:29,799 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:29,799 INFO [RS:1;jenkins-hbase4:41059] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:29,800 INFO [RS:1;jenkins-hbase4:41059] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:29,801 INFO [RS:1;jenkins-hbase4:41059] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41059 2023-07-12 22:18:29,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.06 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/.tmp/m/ee4ea0746cb04f418720dd90af7b2a1f 2023-07-12 22:18:29,823 DEBUG [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:29,823 INFO [RS:0;jenkins-hbase4:37441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37441%2C1689200282765:(num 1689200285593) 2023-07-12 22:18:29,823 DEBUG [RS:0;jenkins-hbase4:37441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,823 INFO [RS:0;jenkins-hbase4:37441] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,841 INFO [RS:0;jenkins-hbase4:37441] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:29,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee4ea0746cb04f418720dd90af7b2a1f 2023-07-12 22:18:29,842 INFO [RS:0;jenkins-hbase4:37441] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-12 22:18:29,843 INFO [RS:0;jenkins-hbase4:37441] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:29,843 INFO [RS:0;jenkins-hbase4:37441] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:29,843 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:29,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/.tmp/m/ee4ea0746cb04f418720dd90af7b2a1f as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m/ee4ea0746cb04f418720dd90af7b2a1f 2023-07-12 22:18:29,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee4ea0746cb04f418720dd90af7b2a1f 2023-07-12 22:18:29,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/m/ee4ea0746cb04f418720dd90af7b2a1f, entries=28, sequenceid=101, filesize=6.1 K 2023-07-12 22:18:29,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.06 KB/27705, heapSize ~44.63 KB/45704, currentSize=0 B/0 for 014ce980d8eb773efb72cff5eb62d9a2 in 101ms, sequenceid=101, compaction requested=false 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:29,855 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41059,1689200282965 2023-07-12 22:18:29,856 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,856 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,856 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41059,1689200282965] 2023-07-12 22:18:29,856 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41059,1689200282965; numProcessing=1 2023-07-12 22:18:29,856 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 22:18:29,856 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 22:18:29,857 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41059,1689200282965 already deleted, retry=false 2023-07-12 22:18:29,858 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41059,1689200282965 expired; onlineServers=3 2023-07-12 22:18:29,867 INFO [RS:0;jenkins-hbase4:37441] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37441 2023-07-12 22:18:29,876 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:29,876 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,876 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:29,877 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1689200282765 2023-07-12 22:18:29,877 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37441,1689200282765] 2023-07-12 22:18:29,877 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37441,1689200282765; numProcessing=2 2023-07-12 22:18:29,878 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37441,1689200282765 already deleted, retry=false 2023-07-12 22:18:29,878 INFO [RegionServerTracker-0] master.ServerManager(561): 
Cluster shutdown set; jenkins-hbase4.apache.org,37441,1689200282765 expired; onlineServers=2 2023-07-12 22:18:29,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/rsgroup/014ce980d8eb773efb72cff5eb62d9a2/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-12 22:18:29,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:29,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 014ce980d8eb773efb72cff5eb62d9a2: 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689200286280.014ce980d8eb773efb72cff5eb62d9a2. 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5dadbef7ea97919927df58525570971d, disabling compactions & flushes 2023-07-12 22:18:29,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. after waiting 0 ms 2023-07-12 22:18:29,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:29,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=72.31 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/info/f315bace95f74a17b24748e7ba8c8f15 2023-07-12 22:18:29,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/namespace/5dadbef7ea97919927df58525570971d/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-12 22:18:29,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5dadbef7ea97919927df58525570971d: 2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689200286163.5dadbef7ea97919927df58525570971d. 
2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e434966f28850b76223c1f3ef1ceaf0c, disabling compactions & flushes 2023-07-12 22:18:29,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. after waiting 0 ms 2023-07-12 22:18:29,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:29,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f315bace95f74a17b24748e7ba8c8f15 2023-07-12 22:18:29,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/default/unmovedTable/e434966f28850b76223c1f3ef1ceaf0c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 22:18:29,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:29,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e434966f28850b76223c1f3ef1ceaf0c: 2023-07-12 22:18:29,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689200304589.e434966f28850b76223c1f3ef1ceaf0c. 2023-07-12 22:18:29,931 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/rep_barrier/60d540b6045d43e1a27905018b3e56e1 2023-07-12 22:18:29,937 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60d540b6045d43e1a27905018b3e56e1 2023-07-12 22:18:29,953 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41907,1689200287570; all regions closed. 
2023-07-12 22:18:29,959 DEBUG [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 22:18:29,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/table/d246dc23b6ab4071880d3898823ec41b 2023-07-12 22:18:29,967 DEBUG [RS:3;jenkins-hbase4:41907] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:29,967 INFO [RS:3;jenkins-hbase4:41907] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41907%2C1689200287570:(num 1689200287954) 2023-07-12 22:18:29,967 DEBUG [RS:3;jenkins-hbase4:41907] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:29,967 INFO [RS:3;jenkins-hbase4:41907] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:29,967 INFO [RS:3;jenkins-hbase4:41907] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:29,968 INFO [RS:3;jenkins-hbase4:41907] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:29,968 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:29,968 INFO [RS:3;jenkins-hbase4:41907] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:29,968 INFO [RS:3;jenkins-hbase4:41907] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:29,969 INFO [RS:3;jenkins-hbase4:41907] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41907 2023-07-12 22:18:29,973 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d246dc23b6ab4071880d3898823ec41b 2023-07-12 22:18:29,974 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/info/f315bace95f74a17b24748e7ba8c8f15 as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info/f315bace95f74a17b24748e7ba8c8f15 2023-07-12 22:18:29,980 INFO [RS:0;jenkins-hbase4:37441] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37441,1689200282765; zookeeper connection closed. 
2023-07-12 22:18:29,980 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:29,980 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1015b9d43b70001, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:29,981 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5df95a29] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5df95a29 2023-07-12 22:18:29,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f315bace95f74a17b24748e7ba8c8f15 2023-07-12 22:18:29,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/info/f315bace95f74a17b24748e7ba8c8f15, entries=97, sequenceid=214, filesize=16.0 K 2023-07-12 22:18:29,986 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:29,987 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/rep_barrier/60d540b6045d43e1a27905018b3e56e1 as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier/60d540b6045d43e1a27905018b3e56e1 2023-07-12 22:18:29,988 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,988 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41907,1689200287570 2023-07-12 22:18:29,989 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41907,1689200287570] 2023-07-12 22:18:29,989 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41907,1689200287570; numProcessing=3 2023-07-12 22:18:29,996 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41907,1689200287570 already deleted, retry=false 2023-07-12 22:18:29,996 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41907,1689200287570 expired; onlineServers=1 2023-07-12 22:18:30,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60d540b6045d43e1a27905018b3e56e1 2023-07-12 22:18:30,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/rep_barrier/60d540b6045d43e1a27905018b3e56e1, entries=18, sequenceid=214, filesize=6.9 K 2023-07-12 22:18:30,020 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/.tmp/table/d246dc23b6ab4071880d3898823ec41b as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table/d246dc23b6ab4071880d3898823ec41b 2023-07-12 22:18:30,032 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d246dc23b6ab4071880d3898823ec41b 2023-07-12 22:18:30,032 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/table/d246dc23b6ab4071880d3898823ec41b, entries=27, sequenceid=214, filesize=7.2 K 2023-07-12 22:18:30,033 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78.12 KB/79997, heapSize ~122.95 KB/125896, currentSize=0 B/0 for 1588230740 in 274ms, sequenceid=214, compaction requested=false 2023-07-12 22:18:30,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/data/hbase/meta/1588230740/recovered.edits/217.seqid, newMaxSeqId=217, maxSeqId=18 2023-07-12 22:18:30,059 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:30,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:30,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:30,064 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:30,096 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,096 INFO [RS:3;jenkins-hbase4:41907] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41907,1689200287570; zookeeper connection closed. 2023-07-12 22:18:30,097 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41907-0x1015b9d43b7000b, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,097 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2c345f93] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2c345f93 2023-07-12 22:18:30,159 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44439,1689200283155; all regions closed. 
2023-07-12 22:18:30,171 DEBUG [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:30,171 INFO [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44439%2C1689200283155.meta:.meta(num 1689200288672) 2023-07-12 22:18:30,179 DEBUG [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/oldWALs 2023-07-12 22:18:30,179 INFO [RS:2;jenkins-hbase4:44439] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44439%2C1689200283155:(num 1689200285592) 2023-07-12 22:18:30,179 DEBUG [RS:2;jenkins-hbase4:44439] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:30,179 INFO [RS:2;jenkins-hbase4:44439] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:30,179 INFO [RS:2;jenkins-hbase4:44439] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:30,179 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:30,180 INFO [RS:2;jenkins-hbase4:44439] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44439 2023-07-12 22:18:30,182 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44439,1689200283155 2023-07-12 22:18:30,182 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:30,183 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44439,1689200283155] 2023-07-12 22:18:30,183 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44439,1689200283155; numProcessing=4 2023-07-12 22:18:30,186 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44439,1689200283155 already deleted, retry=false 2023-07-12 22:18:30,186 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44439,1689200283155 expired; onlineServers=0 2023-07-12 22:18:30,186 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34283,1689200280641' ***** 2023-07-12 22:18:30,187 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 22:18:30,187 DEBUG [M:0;jenkins-hbase4:34283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2bc1168a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:30,187 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:30,191 INFO 
[M:0;jenkins-hbase4:34283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5aa9eede{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:30,191 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:30,191 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:30,191 INFO [M:0;jenkins-hbase4:34283] server.AbstractConnector(383): Stopped ServerConnector@3348b71b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:30,191 INFO [M:0;jenkins-hbase4:34283] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:30,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:30,192 INFO [M:0;jenkins-hbase4:34283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d3eef7e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:30,193 INFO [M:0;jenkins-hbase4:34283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67509e5d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:30,193 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34283,1689200280641 2023-07-12 22:18:30,193 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34283,1689200280641; all regions closed. 2023-07-12 22:18:30,193 DEBUG [M:0;jenkins-hbase4:34283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:30,193 INFO [M:0;jenkins-hbase4:34283] master.HMaster(1491): Stopping master jetty server 2023-07-12 22:18:30,194 INFO [M:0;jenkins-hbase4:34283] server.AbstractConnector(383): Stopped ServerConnector@7c96907d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:30,195 DEBUG [M:0;jenkins-hbase4:34283] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 22:18:30,195 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 22:18:30,195 DEBUG [M:0;jenkins-hbase4:34283] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 22:18:30,195 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200285201] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200285201,5,FailOnTimeoutGroup] 2023-07-12 22:18:30,195 INFO [M:0;jenkins-hbase4:34283] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 22:18:30,195 INFO [M:0;jenkins-hbase4:34283] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 22:18:30,195 INFO [M:0;jenkins-hbase4:34283] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 22:18:30,195 DEBUG [M:0;jenkins-hbase4:34283] master.HMaster(1512): Stopping service threads 2023-07-12 22:18:30,195 INFO [M:0;jenkins-hbase4:34283] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 22:18:30,196 ERROR [M:0;jenkins-hbase4:34283] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 22:18:30,196 INFO [M:0;jenkins-hbase4:34283] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 22:18:30,197 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 22:18:30,197 DEBUG [M:0;jenkins-hbase4:34283] zookeeper.ZKUtil(398): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 22:18:30,197 WARN [M:0;jenkins-hbase4:34283] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 22:18:30,197 INFO [M:0;jenkins-hbase4:34283] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 22:18:30,198 INFO [M:0;jenkins-hbase4:34283] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 22:18:30,198 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:30,198 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:30,198 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:30,198 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:30,198 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 22:18:30,198 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.72 KB heapSize=632.86 KB 2023-07-12 22:18:30,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200285201] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200285201,5,FailOnTimeoutGroup] 2023-07-12 22:18:30,219 INFO [M:0;jenkins-hbase4:34283] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.72 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9a1a494390af455d9fa4ccb6f797d6ea 2023-07-12 22:18:30,226 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9a1a494390af455d9fa4ccb6f797d6ea as hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9a1a494390af455d9fa4ccb6f797d6ea 2023-07-12 22:18:30,235 INFO [M:0;jenkins-hbase4:34283] regionserver.HStore(1080): Added hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9a1a494390af455d9fa4ccb6f797d6ea, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-12 22:18:30,236 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegion(2948): Finished flush of dataSize ~528.72 KB/541406, heapSize ~632.84 KB/648032, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 38ms, sequenceid=1176, compaction requested=false 2023-07-12 22:18:30,248 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:30,248 DEBUG [M:0;jenkins-hbase4:34283] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:30,259 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:30,259 INFO [M:0;jenkins-hbase4:34283] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 22:18:30,259 INFO [M:0;jenkins-hbase4:34283] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34283 2023-07-12 22:18:30,261 DEBUG [M:0;jenkins-hbase4:34283] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34283,1689200280641 already deleted, retry=false 2023-07-12 22:18:30,318 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,318 INFO [RS:2;jenkins-hbase4:44439] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44439,1689200283155; zookeeper connection closed. 
2023-07-12 22:18:30,318 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:44439-0x1015b9d43b70003, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,318 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d70c03a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d70c03a 2023-07-12 22:18:30,418 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,418 INFO [RS:1;jenkins-hbase4:41059] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41059,1689200282965; zookeeper connection closed. 2023-07-12 22:18:30,418 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): regionserver:41059-0x1015b9d43b70002, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,418 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a20089d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a20089d 2023-07-12 22:18:30,419 INFO [Listener at localhost/40739] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 22:18:30,518 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,518 INFO [M:0;jenkins-hbase4:34283] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34283,1689200280641; zookeeper connection closed. 
2023-07-12 22:18:30,519 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): master:34283-0x1015b9d43b70000, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:30,521 WARN [Listener at localhost/40739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:30,525 INFO [Listener at localhost/40739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:30,631 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:30,631 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-807763544-172.31.14.131-1689200277053 (Datanode Uuid def65813-8de1-44de-b3c6-d7857e827dfd) service to localhost/127.0.0.1:40075 2023-07-12 22:18:30,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data5/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,633 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data6/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,635 WARN [Listener at localhost/40739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:30,642 INFO [Listener at localhost/40739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:30,752 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:30,752 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-807763544-172.31.14.131-1689200277053 (Datanode Uuid b163f286-5e2c-4ef9-a39b-a7ac5c79bf2e) service to localhost/127.0.0.1:40075 2023-07-12 22:18:30,753 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data3/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,753 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data4/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,755 WARN [Listener at localhost/40739] datanode.DirectoryScanner(534): DirectoryScanner: 
shutdown has been called 2023-07-12 22:18:30,764 INFO [Listener at localhost/40739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:30,777 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:30,777 WARN [BP-807763544-172.31.14.131-1689200277053 heartbeating to localhost/127.0.0.1:40075] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-807763544-172.31.14.131-1689200277053 (Datanode Uuid dab512fc-edd3-4a5f-9792-f93a593c97dd) service to localhost/127.0.0.1:40075 2023-07-12 22:18:30,782 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data2/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,782 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/cluster_a27aa5c3-7c2c-58b8-f3ec-fb80c425fbe6/dfs/data/data1/current/BP-807763544-172.31.14.131-1689200277053] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:30,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:30,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:30,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:30,824 INFO [Listener at localhost/40739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:30,848 INFO [Listener at localhost/40739] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 22:18:30,910 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 22:18:30,910 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 22:18:30,910 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.log.dir so I do NOT create it in target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed 2023-07-12 22:18:30,910 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1907f92e-990f-cac8-d54e-ae50e418b729/hadoop.tmp.dir so I do NOT create it in target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed 2023-07-12 22:18:30,911 INFO [Listener at localhost/40739] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765, deleteOnExit=true 2023-07-12 22:18:30,911 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 22:18:30,911 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/test.cache.data in system properties and HBase conf 2023-07-12 22:18:30,911 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 22:18:30,911 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir in system properties and HBase conf 2023-07-12 22:18:30,912 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 22:18:30,912 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 22:18:30,912 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 22:18:30,912 DEBUG [Listener at localhost/40739] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 22:18:30,912 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 22:18:30,913 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/nfs.dump.dir in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:18:30,914 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 22:18:30,915 INFO [Listener at localhost/40739] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 22:18:30,919 WARN [Listener at localhost/40739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:18:30,919 WARN [Listener at localhost/40739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:18:30,943 DEBUG [Listener at localhost/40739-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015b9d43b7000a, quorum=127.0.0.1:59420, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 22:18:30,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015b9d43b7000a, quorum=127.0.0.1:59420, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 22:18:30,985 WARN [Listener at localhost/40739] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:30,988 INFO [Listener at localhost/40739] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:31,000 INFO [Listener at localhost/40739] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/Jetty_localhost_39521_hdfs____pp4xq0/webapp 2023-07-12 22:18:31,106 INFO [Listener at localhost/40739] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39521 2023-07-12 22:18:31,111 WARN [Listener at localhost/40739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:18:31,111 WARN [Listener at localhost/40739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:18:31,161 WARN [Listener at localhost/33559] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:31,197 WARN [Listener at localhost/33559] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:31,203 WARN [Listener 
at localhost/33559] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:31,204 INFO [Listener at localhost/33559] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:31,216 INFO [Listener at localhost/33559] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/Jetty_localhost_35767_datanode____.w3h477/webapp 2023-07-12 22:18:31,324 INFO [Listener at localhost/33559] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35767 2023-07-12 22:18:31,338 WARN [Listener at localhost/41117] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:31,356 WARN [Listener at localhost/41117] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:31,358 WARN [Listener at localhost/41117] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:31,402 INFO [Listener at localhost/41117] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:31,407 INFO [Listener at localhost/41117] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/Jetty_localhost_45729_datanode____e0v7yc/webapp 2023-07-12 22:18:31,497 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x23c0a3a7eab2a729: Processing first storage report for DS-01589847-8470-4208-b641-8ff00a5898c0 from datanode 86d76ca0-e194-4967-9c13-2bc54c8e3faa 2023-07-12 22:18:31,497 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x23c0a3a7eab2a729: from storage DS-01589847-8470-4208-b641-8ff00a5898c0 node DatanodeRegistration(127.0.0.1:34159, datanodeUuid=86d76ca0-e194-4967-9c13-2bc54c8e3faa, infoPort=41833, infoSecurePort=0, ipcPort=41117, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,497 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x23c0a3a7eab2a729: Processing first storage report for DS-a72a6577-b5c2-4cf1-86aa-9a7a3fc1c904 from datanode 86d76ca0-e194-4967-9c13-2bc54c8e3faa 2023-07-12 22:18:31,497 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x23c0a3a7eab2a729: from storage DS-a72a6577-b5c2-4cf1-86aa-9a7a3fc1c904 node DatanodeRegistration(127.0.0.1:34159, datanodeUuid=86d76ca0-e194-4967-9c13-2bc54c8e3faa, infoPort=41833, infoSecurePort=0, ipcPort=41117, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,523 INFO [Listener at localhost/41117] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45729 2023-07-12 22:18:31,533 WARN [Listener at localhost/35261] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 22:18:31,557 WARN [Listener at localhost/35261] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 22:18:31,625 WARN [Listener at localhost/35261] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:31,628 WARN [Listener at localhost/35261] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:31,629 INFO [Listener at localhost/35261] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:31,632 INFO [Listener at localhost/35261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/Jetty_localhost_38447_datanode____qk6zh5/webapp 2023-07-12 22:18:31,671 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7e79037d7e2f7762: Processing first storage report for DS-09640063-8815-4067-8639-c94737342f71 from datanode d08f1db3-63cc-4032-9838-8e7a1398b55c 2023-07-12 22:18:31,672 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7e79037d7e2f7762: from storage DS-09640063-8815-4067-8639-c94737342f71 node DatanodeRegistration(127.0.0.1:43507, datanodeUuid=d08f1db3-63cc-4032-9838-8e7a1398b55c, infoPort=34335, infoSecurePort=0, ipcPort=35261, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,672 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7e79037d7e2f7762: Processing first storage report for DS-9a251912-4946-4932-857a-5113fdad2bb7 from datanode d08f1db3-63cc-4032-9838-8e7a1398b55c 2023-07-12 22:18:31,672 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7e79037d7e2f7762: from storage DS-9a251912-4946-4932-857a-5113fdad2bb7 node DatanodeRegistration(127.0.0.1:43507, datanodeUuid=d08f1db3-63cc-4032-9838-8e7a1398b55c, infoPort=34335, infoSecurePort=0, ipcPort=35261, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,732 INFO [Listener at localhost/35261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38447 2023-07-12 22:18:31,739 WARN [Listener at localhost/43911] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:31,835 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f8435a635e89c50: Processing first storage report for DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd from datanode 140807b8-72d4-4169-9915-26353847a3a6 2023-07-12 22:18:31,835 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f8435a635e89c50: from storage DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd node DatanodeRegistration(127.0.0.1:45365, datanodeUuid=140807b8-72d4-4169-9915-26353847a3a6, infoPort=43147, infoSecurePort=0, ipcPort=43911, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: true, processing time: 0 
msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,835 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f8435a635e89c50: Processing first storage report for DS-de22ab44-8e7f-44e9-8348-df30756c5ded from datanode 140807b8-72d4-4169-9915-26353847a3a6 2023-07-12 22:18:31,835 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f8435a635e89c50: from storage DS-de22ab44-8e7f-44e9-8348-df30756c5ded node DatanodeRegistration(127.0.0.1:45365, datanodeUuid=140807b8-72d4-4169-9915-26353847a3a6, infoPort=43147, infoSecurePort=0, ipcPort=43911, storageInfo=lv=-57;cid=testClusterID;nsid=221504216;c=1689200310922), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:31,848 DEBUG [Listener at localhost/43911] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed 2023-07-12 22:18:31,851 INFO [Listener at localhost/43911] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/zookeeper_0, clientPort=54162, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 22:18:31,852 INFO [Listener at localhost/43911] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54162 2023-07-12 22:18:31,853 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:31,853 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:31,876 INFO [Listener at localhost/43911] util.FSUtils(471): Created version file at hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6 with version=8 2023-07-12 22:18:31,876 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/hbase-staging 2023-07-12 22:18:31,877 DEBUG [Listener at localhost/43911] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 22:18:31,877 DEBUG [Listener at localhost/43911] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 22:18:31,877 DEBUG [Listener at localhost/43911] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 22:18:31,877 DEBUG [Listener at localhost/43911] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
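The block above shows the second mini cluster coming up with the logged StartMiniClusterOption (1 master, 3 region servers, 3 datanodes, 1 ZK server), a fresh MiniZooKeeperCluster on a random client port, and a new hbase.rootdir version file. A hedged sketch of building that option with the public builder API, assuming HBase 2.4's StartMiniClusterOption; variable names are illustrative.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class RestartMiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors the logged option: 1 master, 3 region servers, 3 datanodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // MiniDFSCluster + MiniZooKeeperCluster + HBase
        util.shutdownMiniCluster();
      }
    }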
2023-07-12 22:18:31,878 INFO [Listener at localhost/43911] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:31,878 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:31,878 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:31,878 INFO [Listener at localhost/43911] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:31,878 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:31,879 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:31,879 INFO [Listener at localhost/43911] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:31,879 INFO [Listener at localhost/43911] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39075 2023-07-12 22:18:31,880 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:31,881 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:31,882 INFO [Listener at localhost/43911] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39075 connecting to ZooKeeper ensemble=127.0.0.1:54162 2023-07-12 22:18:31,889 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:390750x0, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:31,890 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39075-0x1015b9dc12d0000 connected 2023-07-12 22:18:31,907 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:31,908 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:31,908 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:31,910 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39075 2023-07-12 22:18:31,911 DEBUG [Listener at localhost/43911] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39075 2023-07-12 22:18:31,911 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39075 2023-07-12 22:18:31,914 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39075 2023-07-12 22:18:31,914 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39075 2023-07-12 22:18:31,916 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:31,916 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:31,916 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:31,917 INFO [Listener at localhost/43911] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 22:18:31,917 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:31,917 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:31,917 INFO [Listener at localhost/43911] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
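The ZKUtil(164) lines above record watchers being set on znodes that do not exist yet (/hbase/master, /hbase/running, /hbase/acl), so the process is notified the moment another participant creates them. The same idea can be shown with the raw Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil; the connect string and timeout below are illustrative.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Connect string and session timeout are placeholders for the test's ZK quorum.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54162", 30000,
            (WatchedEvent event) -> System.out.println("Session event: " + event));
        // exists() registers the watch even when the znode is absent, so a later
        // NodeCreated event for /hbase/master is still delivered -- the same pattern
        // as the "Set watcher on znode that does not yet exist" entries above.
        zk.exists("/hbase/master",
            event -> System.out.println("Master znode event: " + event));
        // ... react to the NodeCreated event, then close the session ...
        zk.close();
      }
    }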
2023-07-12 22:18:31,917 INFO [Listener at localhost/43911] http.HttpServer(1146): Jetty bound to port 34971 2023-07-12 22:18:31,918 INFO [Listener at localhost/43911] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:31,920 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:31,920 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:31,920 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:31,921 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5922bff8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:32,044 INFO [Listener at localhost/43911] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:32,045 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:32,045 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:32,046 INFO [Listener at localhost/43911] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:32,047 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,048 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4a620ab{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/jetty-0_0_0_0-34971-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7459294545751246/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:32,050 INFO [Listener at localhost/43911] server.AbstractConnector(333): Started ServerConnector@77c386e4{HTTP/1.1, (http/1.1)}{0.0.0.0:34971} 2023-07-12 22:18:32,050 INFO [Listener at localhost/43911] server.Server(415): Started @37123ms 2023-07-12 22:18:32,050 INFO [Listener at localhost/43911] master.HMaster(444): hbase.rootdir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6, hbase.cluster.distributed=false 2023-07-12 22:18:32,070 INFO [Listener at localhost/43911] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:32,071 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,071 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,071 INFO 
[Listener at localhost/43911] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:32,071 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,071 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:32,071 INFO [Listener at localhost/43911] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:32,072 INFO [Listener at localhost/43911] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41315 2023-07-12 22:18:32,073 INFO [Listener at localhost/43911] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:32,074 DEBUG [Listener at localhost/43911] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:32,075 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,076 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,078 INFO [Listener at localhost/43911] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41315 connecting to ZooKeeper ensemble=127.0.0.1:54162 2023-07-12 22:18:32,083 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:413150x0, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:32,084 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41315-0x1015b9dc12d0001 connected 2023-07-12 22:18:32,084 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:32,085 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:32,086 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:32,086 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41315 2023-07-12 22:18:32,089 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41315 2023-07-12 22:18:32,090 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41315 2023-07-12 22:18:32,099 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41315 2023-07-12 22:18:32,099 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41315 2023-07-12 22:18:32,102 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:32,102 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:32,102 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:32,103 INFO [Listener at localhost/43911] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:32,103 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:32,103 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:32,104 INFO [Listener at localhost/43911] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:32,105 INFO [Listener at localhost/43911] http.HttpServer(1146): Jetty bound to port 35071 2023-07-12 22:18:32,105 INFO [Listener at localhost/43911] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:32,111 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,111 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:32,112 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,112 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6703a120{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:32,240 INFO [Listener at localhost/43911] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:32,241 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:32,241 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:32,242 INFO [Listener at localhost/43911] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:32,242 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,243 INFO 
[Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4cd8638e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/jetty-0_0_0_0-35071-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3410357358995000079/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:32,244 INFO [Listener at localhost/43911] server.AbstractConnector(333): Started ServerConnector@20156020{HTTP/1.1, (http/1.1)}{0.0.0.0:35071} 2023-07-12 22:18:32,245 INFO [Listener at localhost/43911] server.Server(415): Started @37317ms 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:32,256 INFO [Listener at localhost/43911] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:32,259 INFO [Listener at localhost/43911] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42757 2023-07-12 22:18:32,259 INFO [Listener at localhost/43911] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:32,260 DEBUG [Listener at localhost/43911] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:32,261 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,262 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,263 INFO [Listener at localhost/43911] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42757 connecting to ZooKeeper ensemble=127.0.0.1:54162 2023-07-12 22:18:32,267 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:427570x0, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
22:18:32,269 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42757-0x1015b9dc12d0002 connected 2023-07-12 22:18:32,269 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:32,270 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:32,270 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:32,270 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42757 2023-07-12 22:18:32,271 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42757 2023-07-12 22:18:32,274 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42757 2023-07-12 22:18:32,275 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42757 2023-07-12 22:18:32,278 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42757 2023-07-12 22:18:32,280 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:32,280 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:32,280 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:32,280 INFO [Listener at localhost/43911] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:32,281 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:32,281 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:32,281 INFO [Listener at localhost/43911] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
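At this point three region server processes have registered with ZooKeeper and opened their RPC and info servers, which is the cluster shape the rsgroup tests operate on. A sketch of inspecting the resulting groups through the hbase-rsgroup client follows; it assumes the RSGroupAdminClient API used by these tests, and the quorum/port values are illustrative (the test obtains them from HBaseTestingUtility).

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder ZK settings; a real test reads them from the mini cluster.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "54162");
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          for (RSGroupInfo group : groups) {
            System.out.println(group.getName() + " -> " + group.getServers());
          }
        }
      }
    }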
2023-07-12 22:18:32,281 INFO [Listener at localhost/43911] http.HttpServer(1146): Jetty bound to port 38617 2023-07-12 22:18:32,281 INFO [Listener at localhost/43911] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:32,285 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,285 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:32,286 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,286 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@601eee72{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:32,400 INFO [Listener at localhost/43911] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:32,401 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:32,401 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:32,401 INFO [Listener at localhost/43911] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:32,402 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,403 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@20f82403{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/jetty-0_0_0_0-38617-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6419522009306007548/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:32,404 INFO [Listener at localhost/43911] server.AbstractConnector(333): Started ServerConnector@614ff1bd{HTTP/1.1, (http/1.1)}{0.0.0.0:38617} 2023-07-12 22:18:32,404 INFO [Listener at localhost/43911] server.Server(415): Started @37477ms 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:32,416 INFO 
[Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:32,416 INFO [Listener at localhost/43911] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:32,417 INFO [Listener at localhost/43911] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33227 2023-07-12 22:18:32,418 INFO [Listener at localhost/43911] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:32,419 DEBUG [Listener at localhost/43911] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:32,419 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,420 INFO [Listener at localhost/43911] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,421 INFO [Listener at localhost/43911] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33227 connecting to ZooKeeper ensemble=127.0.0.1:54162 2023-07-12 22:18:32,425 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:332270x0, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:32,426 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33227-0x1015b9dc12d0003 connected 2023-07-12 22:18:32,426 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:32,427 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:32,427 DEBUG [Listener at localhost/43911] zookeeper.ZKUtil(164): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:32,428 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33227 2023-07-12 22:18:32,429 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33227 2023-07-12 22:18:32,429 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33227 2023-07-12 22:18:32,430 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33227 2023-07-12 22:18:32,431 DEBUG [Listener at localhost/43911] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33227 2023-07-12 22:18:32,433 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:32,433 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:32,433 INFO [Listener at localhost/43911] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:32,433 INFO [Listener at localhost/43911] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:32,434 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:32,434 INFO [Listener at localhost/43911] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:32,434 INFO [Listener at localhost/43911] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:32,435 INFO [Listener at localhost/43911] http.HttpServer(1146): Jetty bound to port 40793 2023-07-12 22:18:32,435 INFO [Listener at localhost/43911] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:32,437 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,437 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:32,438 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,438 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d986ee8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:32,558 INFO [Listener at localhost/43911] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:32,559 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:32,559 INFO [Listener at localhost/43911] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:32,560 INFO [Listener at localhost/43911] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:32,560 INFO [Listener at localhost/43911] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:32,561 INFO [Listener at localhost/43911] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2a4e8813{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/java.io.tmpdir/jetty-0_0_0_0-40793-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6184977920359712727/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:32,563 INFO [Listener at localhost/43911] server.AbstractConnector(333): Started ServerConnector@50ea7397{HTTP/1.1, (http/1.1)}{0.0.0.0:40793} 2023-07-12 22:18:32,563 INFO [Listener at localhost/43911] server.Server(415): Started @37636ms 2023-07-12 22:18:32,565 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:32,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4c2c88b8{HTTP/1.1, (http/1.1)}{0.0.0.0:37129} 2023-07-12 22:18:32,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37644ms 2023-07-12 22:18:32,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,573 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:32,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,576 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:32,576 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:32,576 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:32,576 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:32,577 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:32,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39075,1689200311877 from backup master directory 2023-07-12 22:18:32,580 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:32,581 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,581 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:32,581 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:32,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,603 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/hbase.id with ID: 2a22c165-b1ae-412d-a7d4-c45ee727367f 2023-07-12 22:18:32,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:32,621 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x48d56ddd to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:32,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71a8449b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:32,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:32,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 22:18:32,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:32,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store-tmp 2023-07-12 22:18:32,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:32,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:32,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:32,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:32,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:32,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:32,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
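[Illustrative aside, not part of the test log] The 'master:store' descriptor printed above, with its single 'proc' family (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false'), can be expressed with the HBase 2.x client descriptor builders roughly as follows. The table/family names and attribute values are taken from the log line; the class name and everything else is an assumed minimal sketch, not the code HBase itself runs.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Column family 'proc' with the attributes shown in the log entry above.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();

    // Table 'master:store' holding the single 'proc' family.
    TableDescriptor masterStore = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    System.out.println(masterStore);
  }
}
```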
2023-07-12 22:18:32,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:32,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/WALs/jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39075%2C1689200311877, suffix=, logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/WALs/jenkins-hbase4.apache.org,39075,1689200311877, archiveDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/oldWALs, maxLogs=10 2023-07-12 22:18:32,704 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK] 2023-07-12 22:18:32,704 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK] 2023-07-12 22:18:32,706 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK] 2023-07-12 22:18:32,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/WALs/jenkins-hbase4.apache.org,39075,1689200311877/jenkins-hbase4.apache.org%2C39075%2C1689200311877.1689200312687 2023-07-12 22:18:32,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK], DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK], DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK]] 2023-07-12 22:18:32,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:32,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:32,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,717 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,719 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 22:18:32,719 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 22:18:32,720 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:32,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:32,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:32,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10523920320, jitterRate=-0.019883543252944946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:32,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:32,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 22:18:32,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 22:18:32,735 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 22:18:32,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 22:18:32,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 22:18:32,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 22:18:32,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 22:18:32,738 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 22:18:32,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 22:18:32,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 22:18:32,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 22:18:32,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 22:18:32,746 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 22:18:32,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 22:18:32,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 22:18:32,749 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:32,749 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:32,749 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 22:18:32,749 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:32,749 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39075,1689200311877, sessionid=0x1015b9dc12d0000, setting cluster-up flag (Was=false) 2023-07-12 22:18:32,755 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 22:18:32,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,765 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:32,769 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 22:18:32,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:32,771 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.hbase-snapshot/.tmp 2023-07-12 22:18:32,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 22:18:32,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 22:18:32,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 22:18:32,774 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:32,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 22:18:32,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-12 22:18:32,776 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:32,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:32,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 22:18:32,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:32,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:32,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689200342794 2023-07-12 22:18:32,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 22:18:32,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 22:18:32,796 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:32,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 22:18:32,796 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 22:18:32,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 22:18:32,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 22:18:32,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 22:18:32,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200312796,5,FailOnTimeoutGroup] 2023-07-12 22:18:32,797 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:32,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200312797,5,FailOnTimeoutGroup] 2023-07-12 22:18:32,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 22:18:32,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,813 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:32,814 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:32,814 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6 2023-07-12 22:18:32,823 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:32,827 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:32,828 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/info 2023-07-12 22:18:32,828 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:32,829 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:32,829 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:32,830 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:32,831 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:32,831 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:32,831 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:32,833 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/table 2023-07-12 22:18:32,833 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:32,834 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:32,834 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740 2023-07-12 22:18:32,835 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740 2023-07-12 22:18:32,837 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 22:18:32,838 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:32,839 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:32,840 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11798805440, jitterRate=0.09884938597679138}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:32,840 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:32,840 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:32,840 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:32,840 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:32,840 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:32,840 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:32,841 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:32,841 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:32,842 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:32,842 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 22:18:32,842 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 22:18:32,843 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 22:18:32,845 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 22:18:32,866 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(951): ClusterId : 2a22c165-b1ae-412d-a7d4-c45ee727367f 2023-07-12 22:18:32,866 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(951): ClusterId : 2a22c165-b1ae-412d-a7d4-c45ee727367f 2023-07-12 22:18:32,867 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(951): ClusterId : 2a22c165-b1ae-412d-a7d4-c45ee727367f 2023-07-12 22:18:32,868 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:32,868 DEBUG 
[RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:32,869 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:32,872 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:32,872 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:32,872 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:32,872 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:32,872 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:32,872 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:32,874 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:32,876 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:32,876 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:32,877 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ReadOnlyZKClient(139): Connect 0x70bf8c7e to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:32,878 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ReadOnlyZKClient(139): Connect 0x37991f31 to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:32,878 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ReadOnlyZKClient(139): Connect 0x033689d1 to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:32,891 DEBUG [RS:1;jenkins-hbase4:42757] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@395ef7b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:32,891 DEBUG [RS:0;jenkins-hbase4:41315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e787c19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:32,892 DEBUG [RS:1;jenkins-hbase4:42757] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7863fd89, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:32,892 DEBUG [RS:0;jenkins-hbase4:41315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f457622, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:32,892 DEBUG [RS:2;jenkins-hbase4:33227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9cee2a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:32,893 DEBUG [RS:2;jenkins-hbase4:33227] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ccc7dfc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:32,901 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42757 2023-07-12 22:18:32,901 INFO [RS:1;jenkins-hbase4:42757] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:32,901 INFO [RS:1;jenkins-hbase4:42757] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:32,901 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:32,902 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39075,1689200311877 with isa=jenkins-hbase4.apache.org/172.31.14.131:42757, startcode=1689200312255 2023-07-12 22:18:32,902 DEBUG [RS:1;jenkins-hbase4:42757] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:32,903 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48853, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:32,905 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39075] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,905 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:32,905 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 22:18:32,905 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6 2023-07-12 22:18:32,906 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33559 2023-07-12 22:18:32,906 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34971 2023-07-12 22:18:32,906 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33227 2023-07-12 22:18:32,906 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41315 2023-07-12 22:18:32,906 INFO [RS:2;jenkins-hbase4:33227] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:32,906 INFO [RS:2;jenkins-hbase4:33227] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:32,906 INFO [RS:0;jenkins-hbase4:41315] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:32,906 INFO [RS:0;jenkins-hbase4:41315] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:32,906 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:32,906 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:32,907 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:32,907 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39075,1689200311877 with isa=jenkins-hbase4.apache.org/172.31.14.131:41315, startcode=1689200312070 2023-07-12 22:18:32,907 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39075,1689200311877 with isa=jenkins-hbase4.apache.org/172.31.14.131:33227, startcode=1689200312415 2023-07-12 22:18:32,907 DEBUG [RS:0;jenkins-hbase4:41315] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:32,908 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ZKUtil(162): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,908 DEBUG [RS:2;jenkins-hbase4:33227] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:32,908 WARN [RS:1;jenkins-hbase4:42757] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
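[Illustrative aside, not part of the test log] The NodeCreated/NodeChildrenChanged events recorded above are ordinary ZooKeeper watches on znodes such as /hbase/master and /hbase/rs. A minimal sketch with the plain ZooKeeper client; the quorum address and session timeout are taken from the log, everything else (class name, sleep) is assumed for illustration.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Session timeout mirrors the 90000 ms value logged by ReadOnlyZKClient.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54162", 90000, (WatchedEvent event) ->
        System.out.println("Received ZooKeeper Event, type=" + event.getType()
            + ", state=" + event.getState() + ", path=" + event.getPath()));

    // Set a watch on the active-master znode; the callback above fires on NodeCreated/NodeDeleted.
    zk.exists("/hbase/master", true);

    Thread.sleep(10_000); // keep the session alive long enough to observe events
    zk.close();
  }
}
```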
2023-07-12 22:18:32,908 INFO [RS:1;jenkins-hbase4:42757] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:32,908 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42757,1689200312255] 2023-07-12 22:18:32,908 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,909 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34411, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:32,909 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39047, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:32,910 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39075] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,911 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:32,911 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 22:18:32,911 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39075] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,911 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:32,911 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 22:18:32,911 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6 2023-07-12 22:18:32,911 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33559 2023-07-12 22:18:32,911 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34971 2023-07-12 22:18:32,911 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6 2023-07-12 22:18:32,912 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33559 2023-07-12 22:18:32,912 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34971 2023-07-12 22:18:32,919 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:32,920 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ZKUtil(162): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,920 WARN [RS:2;jenkins-hbase4:33227] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:32,920 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41315,1689200312070] 2023-07-12 22:18:32,920 INFO [RS:2;jenkins-hbase4:33227] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:32,920 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ZKUtil(162): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,920 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33227,1689200312415] 2023-07-12 22:18:32,920 WARN [RS:0;jenkins-hbase4:41315] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:32,920 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,920 INFO [RS:0;jenkins-hbase4:41315] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:32,920 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,920 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ZKUtil(162): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,921 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ZKUtil(162): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,923 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ZKUtil(162): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,925 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:32,925 INFO [RS:1;jenkins-hbase4:42757] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:32,926 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ZKUtil(162): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,926 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ZKUtil(162): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,926 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ZKUtil(162): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,926 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ZKUtil(162): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,927 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ZKUtil(162): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,927 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ZKUtil(162): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,927 INFO [RS:1;jenkins-hbase4:42757] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:32,927 INFO [RS:1;jenkins-hbase4:42757] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning 
period: 60000 ms 2023-07-12 22:18:32,927 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,928 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:32,928 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:32,928 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:32,928 INFO [RS:2;jenkins-hbase4:33227] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:32,929 INFO [RS:0;jenkins-hbase4:41315] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:32,929 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,930 INFO [RS:2;jenkins-hbase4:33227] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:32,931 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,931 DEBUG [RS:1;jenkins-hbase4:42757] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,935 INFO [RS:2;jenkins-hbase4:33227] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 
22:18:32,935 INFO [RS:0;jenkins-hbase4:41315] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:32,935 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,935 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:32,938 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,939 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,939 INFO [RS:0;jenkins-hbase4:41315] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:32,939 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,939 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,939 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,939 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:32,940 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:32,940 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,940 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:2;jenkins-hbase4:33227] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,941 DEBUG [RS:0;jenkins-hbase4:41315] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:32,951 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,951 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,951 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,953 INFO [RS:1;jenkins-hbase4:42757] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:32,953 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42757,1689200312255-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,961 INFO [RS:2;jenkins-hbase4:33227] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:32,961 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33227,1689200312415-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,964 INFO [RS:1;jenkins-hbase4:42757] regionserver.Replication(203): jenkins-hbase4.apache.org,42757,1689200312255 started 2023-07-12 22:18:32,964 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42757,1689200312255, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42757, sessionid=0x1015b9dc12d0002 2023-07-12 22:18:32,964 INFO [RS:0;jenkins-hbase4:41315] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:32,964 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:32,964 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41315,1689200312070-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,964 DEBUG [RS:1;jenkins-hbase4:42757] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,964 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42757,1689200312255' 2023-07-12 22:18:32,964 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42757,1689200312255' 2023-07-12 22:18:32,965 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:32,966 DEBUG [RS:1;jenkins-hbase4:42757] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:32,966 DEBUG [RS:1;jenkins-hbase4:42757] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:32,966 INFO [RS:1;jenkins-hbase4:42757] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 22:18:32,969 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,970 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ZKUtil(398): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 22:18:32,970 INFO [RS:1;jenkins-hbase4:42757] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 22:18:32,970 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,971 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,973 INFO [RS:2;jenkins-hbase4:33227] regionserver.Replication(203): jenkins-hbase4.apache.org,33227,1689200312415 started 2023-07-12 22:18:32,973 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33227,1689200312415, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33227, sessionid=0x1015b9dc12d0003 2023-07-12 22:18:32,973 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:32,973 DEBUG [RS:2;jenkins-hbase4:33227] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,973 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33227,1689200312415' 2023-07-12 22:18:32,973 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33227,1689200312415' 2023-07-12 22:18:32,974 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:32,975 DEBUG [RS:2;jenkins-hbase4:33227] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:32,975 DEBUG [RS:2;jenkins-hbase4:33227] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:32,975 INFO [RS:2;jenkins-hbase4:33227] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 22:18:32,975 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,975 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ZKUtil(398): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 22:18:32,975 INFO [RS:2;jenkins-hbase4:33227] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 22:18:32,975 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,975 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,978 INFO [RS:0;jenkins-hbase4:41315] regionserver.Replication(203): jenkins-hbase4.apache.org,41315,1689200312070 started 2023-07-12 22:18:32,978 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41315,1689200312070, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41315, sessionid=0x1015b9dc12d0001 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41315,1689200312070' 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:32,979 DEBUG [RS:0;jenkins-hbase4:41315] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:32,980 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41315,1689200312070' 2023-07-12 22:18:32,980 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:32,980 DEBUG [RS:0;jenkins-hbase4:41315] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:32,980 DEBUG [RS:0;jenkins-hbase4:41315] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:32,980 INFO [RS:0;jenkins-hbase4:41315] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 22:18:32,980 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,981 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ZKUtil(398): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 22:18:32,981 INFO [RS:0;jenkins-hbase4:41315] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 22:18:32,981 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:32,981 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:32,995 DEBUG [jenkins-hbase4:39075] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:32,997 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42757,1689200312255, state=OPENING 2023-07-12 22:18:32,998 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 22:18:33,000 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:33,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42757,1689200312255}] 2023-07-12 22:18:33,000 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:33,074 INFO [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42757%2C1689200312255, suffix=, logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,42757,1689200312255, archiveDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs, maxLogs=32 2023-07-12 22:18:33,077 INFO [RS:2;jenkins-hbase4:33227] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33227%2C1689200312415, suffix=, logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,33227,1689200312415, archiveDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs, maxLogs=32 2023-07-12 22:18:33,083 INFO [RS:0;jenkins-hbase4:41315] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41315%2C1689200312070, suffix=, logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,41315,1689200312070, archiveDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs, maxLogs=32 2023-07-12 22:18:33,086 WARN [ReadOnlyZKClient-127.0.0.1:54162@0x48d56ddd] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 22:18:33,087 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:33,090 
INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56052, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:33,100 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK] 2023-07-12 22:18:33,101 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK] 2023-07-12 22:18:33,101 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42757] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:56052 deadline: 1689200373099, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:33,101 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK] 2023-07-12 22:18:33,108 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK] 2023-07-12 22:18:33,108 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK] 2023-07-12 22:18:33,108 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK] 2023-07-12 22:18:33,109 INFO [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,42757,1689200312255/jenkins-hbase4.apache.org%2C42757%2C1689200312255.1689200313075 2023-07-12 22:18:33,109 DEBUG [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK], DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK], DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK]] 2023-07-12 22:18:33,110 INFO [RS:2;jenkins-hbase4:33227] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,33227,1689200312415/jenkins-hbase4.apache.org%2C33227%2C1689200312415.1689200313078 2023-07-12 22:18:33,114 DEBUG [RS:2;jenkins-hbase4:33227] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK], DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK]] 2023-07-12 22:18:33,118 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK] 2023-07-12 22:18:33,119 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK] 2023-07-12 22:18:33,119 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK] 2023-07-12 22:18:33,123 INFO [RS:0;jenkins-hbase4:41315] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,41315,1689200312070/jenkins-hbase4.apache.org%2C41315%2C1689200312070.1689200313083 2023-07-12 22:18:33,123 DEBUG [RS:0;jenkins-hbase4:41315] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK], DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK], DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK]] 2023-07-12 22:18:33,154 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:33,156 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:33,157 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56064, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:33,162 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 22:18:33,162 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:33,164 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42757%2C1689200312255.meta, suffix=.meta, logDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,42757,1689200312255, archiveDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs, maxLogs=32 2023-07-12 22:18:33,180 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK] 2023-07-12 22:18:33,180 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK] 
2023-07-12 22:18:33,180 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK] 2023-07-12 22:18:33,183 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/WALs/jenkins-hbase4.apache.org,42757,1689200312255/jenkins-hbase4.apache.org%2C42757%2C1689200312255.meta.1689200313164.meta 2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34159,DS-01589847-8470-4208-b641-8ff00a5898c0,DISK], DatanodeInfoWithStorage[127.0.0.1:43507,DS-09640063-8815-4067-8639-c94737342f71,DISK], DatanodeInfoWithStorage[127.0.0.1:45365,DS-fb92fad0-7b00-44a0-b0b1-7f9e4aaeb3cd,DISK]] 2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 22:18:33,183 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 22:18:33,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,184 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 22:18:33,184 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 22:18:33,185 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:33,186 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/info 2023-07-12 22:18:33,186 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/info 2023-07-12 22:18:33,186 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:33,187 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:33,187 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:33,188 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:33,188 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:33,188 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:33,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:33,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:33,190 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/table 2023-07-12 22:18:33,190 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/table 2023-07-12 22:18:33,190 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:33,191 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:33,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740 2023-07-12 22:18:33,193 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740 2023-07-12 22:18:33,195 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 22:18:33,196 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:33,196 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9586171520, jitterRate=-0.10721820592880249}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:33,196 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:33,197 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689200313154 2023-07-12 22:18:33,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 22:18:33,202 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 22:18:33,202 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42757,1689200312255, state=OPEN 2023-07-12 22:18:33,205 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:33,205 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:33,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 22:18:33,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42757,1689200312255 in 205 msec 2023-07-12 22:18:33,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 22:18:33,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 365 msec 2023-07-12 22:18:33,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 433 msec 2023-07-12 22:18:33,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689200313209, completionTime=-1 2023-07-12 22:18:33,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 22:18:33,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 22:18:33,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 22:18:33,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689200373214 2023-07-12 22:18:33,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689200433214 2023-07-12 22:18:33,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39075,1689200311877-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39075,1689200311877-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39075,1689200311877-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39075, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 22:18:33,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:33,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 22:18:33,221 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 22:18:33,221 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:33,222 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:33,223 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,224 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0 empty. 2023-07-12 22:18:33,224 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,224 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 22:18:33,238 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:33,239 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 899340293f6ff231e36e5d29b1a8cba0, NAME => 'hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp 2023-07-12 22:18:33,249 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,249 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 899340293f6ff231e36e5d29b1a8cba0, disabling compactions & flushes 2023-07-12 22:18:33,249 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 
2023-07-12 22:18:33,249 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:33,249 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. after waiting 0 ms 2023-07-12 22:18:33,249 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:33,249 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:33,250 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 899340293f6ff231e36e5d29b1a8cba0: 2023-07-12 22:18:33,252 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:33,253 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200313253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200313253"}]},"ts":"1689200313253"} 2023-07-12 22:18:33,255 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:33,256 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:33,256 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200313256"}]},"ts":"1689200313256"} 2023-07-12 22:18:33,257 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 22:18:33,261 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:33,261 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:33,261 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:33,261 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:33,261 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:33,261 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=899340293f6ff231e36e5d29b1a8cba0, ASSIGN}] 2023-07-12 22:18:33,264 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=899340293f6ff231e36e5d29b1a8cba0, ASSIGN 2023-07-12 22:18:33,265 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=899340293f6ff231e36e5d29b1a8cba0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33227,1689200312415; forceNewPlan=false, retain=false 2023-07-12 22:18:33,405 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:33,407 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 22:18:33,408 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:33,409 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:33,411 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,411 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79 empty. 2023-07-12 22:18:33,412 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,412 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 22:18:33,415 INFO [jenkins-hbase4:39075] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 22:18:33,416 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=899340293f6ff231e36e5d29b1a8cba0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:33,416 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200313416"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200313416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200313416"}]},"ts":"1689200313416"} 2023-07-12 22:18:33,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 899340293f6ff231e36e5d29b1a8cba0, server=jenkins-hbase4.apache.org,33227,1689200312415}] 2023-07-12 22:18:33,429 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:33,430 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 75fd3ede643aebe5790fef4a02d7db79, NAME => 'hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp 2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 75fd3ede643aebe5790fef4a02d7db79, disabling compactions & flushes 2023-07-12 22:18:33,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. after waiting 0 ms 2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:33,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 
2023-07-12 22:18:33,439 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 75fd3ede643aebe5790fef4a02d7db79: 2023-07-12 22:18:33,441 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:33,442 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200313442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200313442"}]},"ts":"1689200313442"} 2023-07-12 22:18:33,443 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:33,444 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:33,444 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200313444"}]},"ts":"1689200313444"} 2023-07-12 22:18:33,445 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 22:18:33,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:33,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:33,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:33,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:33,448 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:33,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=75fd3ede643aebe5790fef4a02d7db79, ASSIGN}] 2023-07-12 22:18:33,449 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=75fd3ede643aebe5790fef4a02d7db79, ASSIGN 2023-07-12 22:18:33,450 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=75fd3ede643aebe5790fef4a02d7db79, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33227,1689200312415; forceNewPlan=false, retain=false 2023-07-12 22:18:33,571 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:33,571 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:33,572 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39530, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:33,577 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:33,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 899340293f6ff231e36e5d29b1a8cba0, NAME => 'hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:33,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,579 INFO [StoreOpener-899340293f6ff231e36e5d29b1a8cba0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,580 DEBUG [StoreOpener-899340293f6ff231e36e5d29b1a8cba0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/info 2023-07-12 22:18:33,580 DEBUG [StoreOpener-899340293f6ff231e36e5d29b1a8cba0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/info 2023-07-12 22:18:33,580 INFO [StoreOpener-899340293f6ff231e36e5d29b1a8cba0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 899340293f6ff231e36e5d29b1a8cba0 columnFamilyName info 2023-07-12 22:18:33,581 INFO [StoreOpener-899340293f6ff231e36e5d29b1a8cba0-1] regionserver.HStore(310): Store=899340293f6ff231e36e5d29b1a8cba0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:33,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:33,587 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:33,588 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 899340293f6ff231e36e5d29b1a8cba0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11737900160, jitterRate=0.09317713975906372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:33,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 899340293f6ff231e36e5d29b1a8cba0: 2023-07-12 22:18:33,589 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0., pid=7, masterSystemTime=1689200313571 2023-07-12 22:18:33,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:33,593 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 
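Once the hbase:namespace region above is open it behaves like any other table, and its rows, one per namespace, can be scanned with the normal client API. A sketch assuming a reachable cluster; note that the 'default' and 'hbase' rows are only written a few entries further down in this log:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanNamespaceTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("hbase", "namespace"));
         ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result r : scanner) {
        // One row per namespace descriptor stored in the table.
        System.out.println(Bytes.toString(r.getRow()));
      }
    }
  }
}
```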
2023-07-12 22:18:33,593 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=899340293f6ff231e36e5d29b1a8cba0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:33,593 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200313593"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200313593"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200313593"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200313593"}]},"ts":"1689200313593"} 2023-07-12 22:18:33,596 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 22:18:33,596 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 899340293f6ff231e36e5d29b1a8cba0, server=jenkins-hbase4.apache.org,33227,1689200312415 in 176 msec 2023-07-12 22:18:33,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-12 22:18:33,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=899340293f6ff231e36e5d29b1a8cba0, ASSIGN in 335 msec 2023-07-12 22:18:33,598 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:33,598 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200313598"}]},"ts":"1689200313598"} 2023-07-12 22:18:33,599 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 22:18:33,600 INFO [jenkins-hbase4:39075] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
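The meta Put above persists the region's OPEN state, hosting server, and openSeqNum; that is the same information clients later read back when locating the region. A sketch of the read side, with connection setup assumed:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class NamespaceRegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase", "namespace"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // For this run that would print something like:
        // 899340293f6ff231e36e5d29b1a8cba0 -> jenkins-hbase4.apache.org,33227,1689200312415
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```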
2023-07-12 22:18:33,601 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=75fd3ede643aebe5790fef4a02d7db79, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:33,601 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200313601"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200313601"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200313601"}]},"ts":"1689200313601"} 2023-07-12 22:18:33,602 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:33,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 75fd3ede643aebe5790fef4a02d7db79, server=jenkins-hbase4.apache.org,33227,1689200312415}] 2023-07-12 22:18:33,604 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 383 msec 2023-07-12 22:18:33,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 22:18:33,623 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:33,623 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:33,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:33,628 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:33,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 22:18:33,637 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:33,640 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-12 22:18:33,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 22:18:33,643 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-12 22:18:33,643 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=hbase 2023-07-12 22:18:33,758 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 75fd3ede643aebe5790fef4a02d7db79, NAME => 'hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. service=MultiRowMutationService 2023-07-12 22:18:33,758 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,760 INFO [StoreOpener-75fd3ede643aebe5790fef4a02d7db79-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,761 DEBUG [StoreOpener-75fd3ede643aebe5790fef4a02d7db79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/m 2023-07-12 22:18:33,761 DEBUG [StoreOpener-75fd3ede643aebe5790fef4a02d7db79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/m 2023-07-12 22:18:33,761 INFO [StoreOpener-75fd3ede643aebe5790fef4a02d7db79-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 75fd3ede643aebe5790fef4a02d7db79 columnFamilyName m 2023-07-12 22:18:33,762 INFO [StoreOpener-75fd3ede643aebe5790fef4a02d7db79-1] regionserver.HStore(310): Store=75fd3ede643aebe5790fef4a02d7db79/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:33,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,765 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:33,767 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:33,768 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 75fd3ede643aebe5790fef4a02d7db79; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@77f48e7f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:33,768 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 75fd3ede643aebe5790fef4a02d7db79: 2023-07-12 22:18:33,768 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79., pid=9, masterSystemTime=1689200313754 2023-07-12 22:18:33,770 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:33,770 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 
2023-07-12 22:18:33,770 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=75fd3ede643aebe5790fef4a02d7db79, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:33,770 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200313770"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200313770"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200313770"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200313770"}]},"ts":"1689200313770"} 2023-07-12 22:18:33,773 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-12 22:18:33,773 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 75fd3ede643aebe5790fef4a02d7db79, server=jenkins-hbase4.apache.org,33227,1689200312415 in 169 msec 2023-07-12 22:18:33,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 22:18:33,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=75fd3ede643aebe5790fef4a02d7db79, ASSIGN in 324 msec 2023-07-12 22:18:33,784 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:33,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 145 msec 2023-07-12 22:18:33,789 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:33,790 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200313789"}]},"ts":"1689200313789"} 2023-07-12 22:18:33,791 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 22:18:33,793 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:33,798 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 22:18:33,799 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 388 msec 2023-07-12 22:18:33,802 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 22:18:33,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed 
initialization 1.221sec 2023-07-12 22:18:33,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-12 22:18:33,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:33,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 22:18:33,803 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 22:18:33,805 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:33,806 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:33,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 22:18:33,808 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:33,809 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4 empty. 2023-07-12 22:18:33,809 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:33,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 22:18:33,814 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 22:18:33,815 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 22:18:33,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 22:18:33,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
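"Quota table not found. Creating..." above is the master reacting to quota support being enabled for this run; the test switches it on in configuration before restarting the mini-cluster (see toggleQuotaCheckAndRestartMiniCluster in the stack trace a few lines below). A sketch of that toggle, assuming an HBaseTestingUtility-driven test; the class and variable names are illustrative:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.quotas.QuotaUtil;

public class QuotaEnabledMiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // QuotaUtil.QUOTA_CONF_KEY is "hbase.quota.enabled"; with it set, the active master
    // creates hbase:quota (families 'q' and 'u') during startup, as logged above.
    util.getConfiguration().setBoolean(QuotaUtil.QUOTA_CONF_KEY, true);
    util.startMiniCluster(3); // three region servers, matching this run's topology
    try {
      // ... exercise quota-dependent behaviour here ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
```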
2023-07-12 22:18:33,823 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:33,823 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:33,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:33,825 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 22:18:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 22:18:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39075,1689200311877-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 22:18:33,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39075,1689200311877-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 22:18:33,827 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39075,1689200311877] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 22:18:33,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 22:18:33,844 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:33,845 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d4696434f4528a2d197b56bdf6d7df4, NAME => 'hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp 2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 2d4696434f4528a2d197b56bdf6d7df4, disabling compactions & flushes 2023-07-12 22:18:33,862 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. after waiting 0 ms 2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:33,862 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 
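The RSGroupStartupWorker messages above ("RSGroup table=hbase:rsgroup is online", "GroupBasedLoadBalancer is now online") mark the point where group membership becomes queryable. A sketch using the branch-2.4 RSGroupAdminClient from the same hbase-rsgroup module this test belongs to; it is an internal-audience class, so treat the exact constructor and method shapes here as assumptions:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // At this point in the log all three region servers still belong to the default group.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println(defaultGroup.getName() + " servers=" + defaultGroup.getServers());
    }
  }
}
```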
2023-07-12 22:18:33,862 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 2d4696434f4528a2d197b56bdf6d7df4: 2023-07-12 22:18:33,865 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:33,866 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689200313866"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200313866"}]},"ts":"1689200313866"} 2023-07-12 22:18:33,868 ERROR [Listener at localhost/43911] master.TableStateManager(95): Unable to get table hbase:quota state org.apache.hadoop.hbase.TableNotFoundException: No state found for hbase:quota at org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:155) at org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:92) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.isTableDisabled(AssignmentManager.java:419) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.getRegionStatesCount(AssignmentManager.java:2341) at org.apache.hadoop.hbase.master.HMaster.getClusterMetricsWithoutCoprocessor(HMaster.java:2616) at org.apache.hadoop.hbase.master.HMaster.getClusterMetrics(HMaster.java:2640) at org.apache.hadoop.hbase.master.HMaster.getClusterMetrics(HMaster.java:2633) at org.apache.hadoop.hbase.MiniHBaseCluster.getClusterMetrics(MiniHBaseCluster.java:710) at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:111) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1131) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1094) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1048) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.toggleQuotaCheckAndRestartMiniCluster(TestRSGroupsAdmin1.java:492) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.testRSGroupListDoesNotContainFailedTableCreation(TestRSGroupsAdmin1.java:410) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-07-12 22:18:33,868 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:33,868 DEBUG [Listener at localhost/43911] zookeeper.ReadOnlyZKClient(139): Connect 0x5d6a3efb to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:33,869 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:33,869 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200313869"}]},"ts":"1689200313869"} 2023-07-12 22:18:33,870 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 22:18:33,874 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:33,874 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:33,874 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:33,875 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:33,875 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:33,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2d4696434f4528a2d197b56bdf6d7df4, ASSIGN}] 2023-07-12 22:18:33,876 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2d4696434f4528a2d197b56bdf6d7df4, ASSIGN 2023-07-12 22:18:33,876 DEBUG [Listener at localhost/43911] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fb2b24c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:33,877 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=2d4696434f4528a2d197b56bdf6d7df4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41315,1689200312070; 
forceNewPlan=false, retain=false 2023-07-12 22:18:33,878 DEBUG [hconnection-0x4162bfb9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:33,880 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:33,881 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:33,881 INFO [Listener at localhost/43911] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:33,884 DEBUG [Listener at localhost/43911] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 22:18:33,886 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56048, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 22:18:33,890 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 22:18:33,890 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:33,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 22:18:33,891 DEBUG [Listener at localhost/43911] zookeeper.ReadOnlyZKClient(139): Connect 0x23510898 to 127.0.0.1:54162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:33,898 DEBUG [Listener at localhost/43911] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@779af43c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:33,898 INFO [Listener at localhost/43911] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54162 2023-07-12 22:18:33,904 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:33,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b9dc12d000a connected 2023-07-12 22:18:33,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 22:18:33,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 22:18:33,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 22:18:33,924 DEBUG [Listener at 
localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:33,926 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 17 msec 2023-07-12 22:18:34,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 22:18:34,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:34,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 22:18:34,024 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:34,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-12 22:18:34,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 22:18:34,026 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:34,026 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:34,027 INFO [jenkins-hbase4:39075] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
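The client requests above create namespace 'np1' with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, then the table 'np1:table1' with a single 'fam1' family. Expressed against the Admin API, roughly (a sketch; connection boilerplate assumed):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class Np1WithQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace-level quotas, matching the properties in the create request above.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // np1:table1 with a single 'fam1' family, as in the request above.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}
```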
2023-07-12 22:18:34,028 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4696434f4528a2d197b56bdf6d7df4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,028 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689200314028"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200314028"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200314028"}]},"ts":"1689200314028"} 2023-07-12 22:18:34,029 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:34,029 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 2d4696434f4528a2d197b56bdf6d7df4, server=jenkins-hbase4.apache.org,41315,1689200312070}] 2023-07-12 22:18:34,030 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,031 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd empty. 2023-07-12 22:18:34,031 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,032 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 22:18:34,045 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:34,046 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => b3e649d4c2182194ad911a47cf267bcd, NAME => 'np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp 2023-07-12 22:18:34,053 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:34,053 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing b3e649d4c2182194ad911a47cf267bcd, disabling compactions & flushes 2023-07-12 22:18:34,053 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 
2023-07-12 22:18:34,053 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,054 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. after waiting 0 ms 2023-07-12 22:18:34,054 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,054 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,054 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for b3e649d4c2182194ad911a47cf267bcd: 2023-07-12 22:18:34,055 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:34,056 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200314056"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200314056"}]},"ts":"1689200314056"} 2023-07-12 22:18:34,057 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:34,058 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:34,058 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200314058"}]},"ts":"1689200314058"} 2023-07-12 22:18:34,059 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 22:18:34,063 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:34,063 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:34,063 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:34,063 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:34,063 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:34,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, ASSIGN}] 2023-07-12 22:18:34,064 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, ASSIGN 2023-07-12 22:18:34,064 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=b3e649d4c2182194ad911a47cf267bcd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41315,1689200312070; forceNewPlan=false, retain=false 2023-07-12 22:18:34,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 22:18:34,182 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,182 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:34,184 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40822, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:34,188 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:34,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d4696434f4528a2d197b56bdf6d7df4, NAME => 'hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,189 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,190 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,191 DEBUG [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/q 2023-07-12 22:18:34,191 DEBUG [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/q 2023-07-12 22:18:34,192 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d4696434f4528a2d197b56bdf6d7df4 columnFamilyName q 2023-07-12 22:18:34,192 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] regionserver.HStore(310): Store=2d4696434f4528a2d197b56bdf6d7df4/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:34,192 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,193 DEBUG [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/u 2023-07-12 22:18:34,193 DEBUG [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/u 2023-07-12 22:18:34,194 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d4696434f4528a2d197b56bdf6d7df4 columnFamilyName u 2023-07-12 22:18:34,194 INFO [StoreOpener-2d4696434f4528a2d197b56bdf6d7df4-1] regionserver.HStore(310): Store=2d4696434f4528a2d197b56bdf6d7df4/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:34,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,197 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
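The FlushLargeStoresPolicy entry above notes that hbase:quota's descriptor does not set hbase.hregion.percolumnfamilyflush.size.lower.bound, so the policy falls back to the region memstore flush size divided by the number of families. The bound can be supplied per table as a descriptor value; a sketch, with the helper and its callers hypothetical:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBoundSketch {
  // Builds a two-family descriptor (like hbase:quota's 'q' and 'u') that sets the
  // lower bound explicitly, so FlushLargeStoresPolicy does not fall back to
  // memstore-flush-size divided by the number of families.
  static TableDescriptor withFlushLowerBound(TableName name, long lowerBoundBytes) {
    return TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("q"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("u"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            Long.toString(lowerBoundBytes))
        .build();
  }
}
```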
2023-07-12 22:18:34,198 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:34,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:34,201 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d4696434f4528a2d197b56bdf6d7df4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11807027840, jitterRate=0.09961515665054321}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 22:18:34,201 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d4696434f4528a2d197b56bdf6d7df4: 2023-07-12 22:18:34,202 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4., pid=16, masterSystemTime=1689200314182 2023-07-12 22:18:34,205 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:34,206 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:34,206 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4696434f4528a2d197b56bdf6d7df4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,206 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689200314206"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200314206"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200314206"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200314206"}]},"ts":"1689200314206"} 2023-07-12 22:18:34,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-12 22:18:34,209 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 2d4696434f4528a2d197b56bdf6d7df4, server=jenkins-hbase4.apache.org,41315,1689200312070 in 178 msec 2023-07-12 22:18:34,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 22:18:34,210 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=2d4696434f4528a2d197b56bdf6d7df4, ASSIGN in 334 msec 2023-07-12 22:18:34,211 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:34,211 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200314211"}]},"ts":"1689200314211"} 2023-07-12 22:18:34,212 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 22:18:34,214 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:34,215 INFO [jenkins-hbase4:39075] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 22:18:34,216 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=b3e649d4c2182194ad911a47cf267bcd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,216 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200314216"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200314216"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200314216"}]},"ts":"1689200314216"} 2023-07-12 22:18:34,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 412 msec 2023-07-12 22:18:34,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure b3e649d4c2182194ad911a47cf267bcd, server=jenkins-hbase4.apache.org,41315,1689200312070}] 2023-07-12 22:18:34,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 22:18:34,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 
2023-07-12 22:18:34,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e649d4c2182194ad911a47cf267bcd, NAME => 'np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:34,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:34,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,374 INFO [StoreOpener-b3e649d4c2182194ad911a47cf267bcd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,375 DEBUG [StoreOpener-b3e649d4c2182194ad911a47cf267bcd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/fam1 2023-07-12 22:18:34,375 DEBUG [StoreOpener-b3e649d4c2182194ad911a47cf267bcd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/fam1 2023-07-12 22:18:34,376 INFO [StoreOpener-b3e649d4c2182194ad911a47cf267bcd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e649d4c2182194ad911a47cf267bcd columnFamilyName fam1 2023-07-12 22:18:34,376 INFO [StoreOpener-b3e649d4c2182194ad911a47cf267bcd-1] regionserver.HStore(310): Store=b3e649d4c2182194ad911a47cf267bcd/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:34,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:34,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e649d4c2182194ad911a47cf267bcd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9988644480, jitterRate=-0.06973499059677124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:34,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e649d4c2182194ad911a47cf267bcd: 2023-07-12 22:18:34,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd., pid=18, masterSystemTime=1689200314368 2023-07-12 22:18:34,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,385 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=b3e649d4c2182194ad911a47cf267bcd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,385 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200314385"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200314385"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200314385"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200314385"}]},"ts":"1689200314385"} 2023-07-12 22:18:34,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 22:18:34,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure b3e649d4c2182194ad911a47cf267bcd, server=jenkins-hbase4.apache.org,41315,1689200312070 in 170 msec 2023-07-12 22:18:34,390 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-12 22:18:34,390 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, ASSIGN in 325 msec 2023-07-12 22:18:34,391 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:34,391 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200314391"}]},"ts":"1689200314391"} 2023-07-12 22:18:34,392 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 22:18:34,394 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:34,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 373 msec 2023-07-12 22:18:34,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 22:18:34,628 INFO [Listener at localhost/43911] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-12 22:18:34,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:34,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 22:18:34,632 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:34,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 22:18:34,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 22:18:34,649 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:34,651 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40830, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:34,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=24 msec 2023-07-12 22:18:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 22:18:34,738 INFO [Listener at localhost/43911] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-12 22:18:34,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:34,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:34,741 INFO [Listener at localhost/43911] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 22:18:34,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-12 22:18:34,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 22:18:34,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 22:18:34,745 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200314744"}]},"ts":"1689200314744"} 2023-07-12 22:18:34,746 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 22:18:34,749 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 22:18:34,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, UNASSIGN}] 2023-07-12 22:18:34,750 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, UNASSIGN 2023-07-12 22:18:34,751 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b3e649d4c2182194ad911a47cf267bcd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:34,751 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200314751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200314751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200314751"}]},"ts":"1689200314751"} 2023-07-12 22:18:34,752 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure b3e649d4c2182194ad911a47cf267bcd, server=jenkins-hbase4.apache.org,41315,1689200312070}] 2023-07-12 22:18:34,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 22:18:34,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e649d4c2182194ad911a47cf267bcd, disabling compactions & flushes 2023-07-12 22:18:34,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. after waiting 0 ms 2023-07-12 22:18:34,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 2023-07-12 22:18:34,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:34,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd. 
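
The rollback of pid=19 above is the namespace region quota at work: np1 permits at most 5 regions, np1:table1 already holds one, and the requested np1:table2 would have added 6 more. A hedged sketch of the client-side calls that produce this situation is below; the quota key hbase.namespace.quota.maxregions and the Admin methods are the stock HBase 2.x API, but the six-region pre-split is only an assumption made here to match the "6 regions" figure in the error message:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceRegionQuotaSketch {
      static void demo(Admin admin) throws IOException {
        // Cap the namespace at 5 regions via the namespace quota property.
        admin.createNamespace(NamespaceDescriptor.create("np1")
            .addConfiguration("hbase.namespace.quota.maxregions", "5").build());

        // np1:table1 with a single region, as in the log above.
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build());

        // np1:table2 pre-split into 6 regions (5 split keys) -- an assumption made to
        // match the "6 regions" in the error; 1 existing region + 6 new ones > quota of 5.
        byte[][] splits = { Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
            Bytes.toBytes("4"), Bytes.toBytes("5") };
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1")).build(), splits);
        } catch (IOException e) {
          // The CreateTableProcedure rolls back on the master and the client sees the
          // QuotaExceededException message quoted in the log.
        }
      }
    }
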
2023-07-12 22:18:34,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e649d4c2182194ad911a47cf267bcd: 2023-07-12 22:18:34,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:34,912 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b3e649d4c2182194ad911a47cf267bcd, regionState=CLOSED 2023-07-12 22:18:34,912 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200314912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200314912"}]},"ts":"1689200314912"} 2023-07-12 22:18:34,914 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 22:18:34,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure b3e649d4c2182194ad911a47cf267bcd, server=jenkins-hbase4.apache.org,41315,1689200312070 in 161 msec 2023-07-12 22:18:34,916 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 22:18:34,916 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=b3e649d4c2182194ad911a47cf267bcd, UNASSIGN in 166 msec 2023-07-12 22:18:34,916 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200314916"}]},"ts":"1689200314916"} 2023-07-12 22:18:34,917 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 22:18:34,920 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 22:18:34,922 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 180 msec 2023-07-12 22:18:35,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 22:18:35,046 INFO [Listener at localhost/43911] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 22:18:35,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-12 22:18:35,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,049 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 22:18:35,050 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,051 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:35,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:35,053 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:35,055 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/fam1, FileablePath, hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/recovered.edits] 2023-07-12 22:18:35,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 22:18:35,060 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/recovered.edits/4.seqid to hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/archive/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd/recovered.edits/4.seqid 2023-07-12 22:18:35,061 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/.tmp/data/np1/table1/b3e649d4c2182194ad911a47cf267bcd 2023-07-12 22:18:35,061 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 22:18:35,063 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,064 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 22:18:35,066 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-12 22:18:35,067 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,067 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 22:18:35,067 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200315067"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:35,068 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 22:18:35,068 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b3e649d4c2182194ad911a47cf267bcd, NAME => 'np1:table1,,1689200314020.b3e649d4c2182194ad911a47cf267bcd.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 22:18:35,068 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
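
Procedures 20 through 23 above are the usual cleanup path: the table is first disabled (regions unassigned, state set to DISABLED in hbase:meta), then deleted (region directories archived, meta rows and the descriptor removed, and the rsgroup endpoint drops the table from the 'default' group). From the client side this is just two Admin calls; the sketch below uses the stock HBase 2.x API, with the Connection parameter assumed:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class DropTableSketch {
      static void dropTable(Connection connection) throws IOException {
        TableName table1 = TableName.valueOf("np1:table1");
        try (Admin admin = connection.getAdmin()) {
          // DisableTableProcedure: unassign regions, mark DISABLING -> DISABLED in hbase:meta.
          admin.disableTable(table1);
          // DeleteTableProcedure: archive region dirs, delete meta rows, drop the descriptor.
          admin.deleteTable(table1);
        }
      }
    }
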
2023-07-12 22:18:35,069 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200315068"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:35,070 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 22:18:35,071 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 22:18:35,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 24 msec 2023-07-12 22:18:35,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 22:18:35,156 INFO [Listener at localhost/43911] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 22:18:35,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-12 22:18:35,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,169 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,172 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,175 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 22:18:35,176 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 22:18:35,176 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:35,177 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,179 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 22:18:35,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-12 22:18:35,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39075] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 22:18:35,277 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 22:18:35,277 INFO [Listener at 
localhost/43911] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 22:18:35,277 DEBUG [Listener at localhost/43911] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d6a3efb to 127.0.0.1:54162 2023-07-12 22:18:35,277 DEBUG [Listener at localhost/43911] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,277 DEBUG [Listener at localhost/43911] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 22:18:35,277 DEBUG [Listener at localhost/43911] util.JVMClusterUtil(257): Found active master hash=536696141, stopped=false 2023-07-12 22:18:35,277 DEBUG [Listener at localhost/43911] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:35,278 DEBUG [Listener at localhost/43911] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:35,278 DEBUG [Listener at localhost/43911] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 22:18:35,278 INFO [Listener at localhost/43911] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:35,280 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:35,280 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:35,280 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:35,280 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:35,280 INFO [Listener at localhost/43911] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 22:18:35,280 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:35,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:35,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:35,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:35,281 DEBUG [Listener at localhost/43911] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48d56ddd to 127.0.0.1:54162 2023-07-12 22:18:35,282 DEBUG [Listener at localhost/43911] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41315,1689200312070' ***** 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:35,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42757,1689200312255' ***** 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:35,283 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:35,283 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33227,1689200312415' ***** 2023-07-12 22:18:35,283 INFO [Listener at localhost/43911] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:35,289 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:35,297 INFO [RS:1;jenkins-hbase4:42757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@20f82403{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:35,297 INFO [RS:0;jenkins-hbase4:41315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4cd8638e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:35,297 INFO [RS:2;jenkins-hbase4:33227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a4e8813{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:35,297 INFO [RS:0;jenkins-hbase4:41315] server.AbstractConnector(383): Stopped ServerConnector@20156020{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:35,297 INFO [RS:1;jenkins-hbase4:42757] server.AbstractConnector(383): Stopped ServerConnector@614ff1bd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:35,297 INFO [RS:2;jenkins-hbase4:33227] server.AbstractConnector(383): Stopped ServerConnector@50ea7397{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:35,297 INFO [RS:1;jenkins-hbase4:42757] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:35,297 INFO [RS:0;jenkins-hbase4:41315] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:35,297 INFO [RS:2;jenkins-hbase4:33227] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:35,300 INFO [RS:0;jenkins-hbase4:41315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6703a120{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 
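
With np1:table1 gone, the DeleteNamespaceProcedure (pid=24) removed the now-empty namespace and the listener thread started shutting down the minicluster, which is what drives the regionserver STOPPING messages above and the region closes that follow. A minimal sketch of the equivalent tear-down calls, using the standard Admin and HBaseTestingUtility methods and assuming the usual TEST_UTIL field name from the test class:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;

    public class TearDownSketch {
      static void tearDown(HBaseTestingUtility TEST_UTIL) throws Exception {
        try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
          // DeleteNamespaceProcedure: only succeeds once the namespace holds no tables.
          admin.deleteNamespace("np1");
        }
        // Stops the masters, regionservers, ZooKeeper and DFS started by startMiniCluster().
        TEST_UTIL.shutdownMiniCluster();
      }
    }
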
2023-07-12 22:18:35,300 INFO [RS:1;jenkins-hbase4:42757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@601eee72{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:35,300 INFO [RS:0;jenkins-hbase4:41315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:35,300 INFO [RS:2;jenkins-hbase4:33227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d986ee8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:35,300 INFO [RS:1;jenkins-hbase4:42757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:35,301 INFO [RS:2;jenkins-hbase4:33227] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:35,301 INFO [RS:1;jenkins-hbase4:42757] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:35,301 INFO [RS:2;jenkins-hbase4:33227] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:35,301 INFO [RS:1;jenkins-hbase4:42757] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:35,301 INFO [RS:2;jenkins-hbase4:33227] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:35,301 INFO [RS:2;jenkins-hbase4:33227] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:35,301 INFO [RS:0;jenkins-hbase4:41315] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:35,301 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(3305): Received CLOSE for 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:35,301 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:35,301 INFO [RS:1;jenkins-hbase4:42757] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 22:18:35,303 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:35,303 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:35,301 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:35,303 DEBUG [RS:1;jenkins-hbase4:42757] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37991f31 to 127.0.0.1:54162 2023-07-12 22:18:35,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 899340293f6ff231e36e5d29b1a8cba0, disabling compactions & flushes 2023-07-12 22:18:35,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:35,303 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(3305): Received CLOSE for 75fd3ede643aebe5790fef4a02d7db79 2023-07-12 22:18:35,301 INFO [RS:0;jenkins-hbase4:41315] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:35,305 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:35,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:35,304 DEBUG [RS:1;jenkins-hbase4:42757] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. after waiting 0 ms 2023-07-12 22:18:35,305 DEBUG [RS:2;jenkins-hbase4:33227] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x033689d1 to 127.0.0.1:54162 2023-07-12 22:18:35,305 INFO [RS:0;jenkins-hbase4:41315] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:35,305 DEBUG [RS:2;jenkins-hbase4:33227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:35,305 INFO [RS:1;jenkins-hbase4:42757] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-12 22:18:35,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 899340293f6ff231e36e5d29b1a8cba0 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 22:18:35,306 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 22:18:35,306 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(3305): Received CLOSE for 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:35,306 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1478): Online Regions={899340293f6ff231e36e5d29b1a8cba0=hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0., 75fd3ede643aebe5790fef4a02d7db79=hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79.} 2023-07-12 22:18:35,307 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:35,306 INFO [RS:1;jenkins-hbase4:42757] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:35,307 DEBUG [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1504): Waiting on 75fd3ede643aebe5790fef4a02d7db79, 899340293f6ff231e36e5d29b1a8cba0 2023-07-12 22:18:35,307 DEBUG [RS:0;jenkins-hbase4:41315] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x70bf8c7e to 127.0.0.1:54162 2023-07-12 22:18:35,308 DEBUG [RS:0;jenkins-hbase4:41315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,309 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:35,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d4696434f4528a2d197b56bdf6d7df4, disabling compactions & flushes 2023-07-12 22:18:35,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:35,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:35,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. after waiting 0 ms 2023-07-12 22:18:35,307 INFO [RS:1;jenkins-hbase4:42757] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:35,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 
2023-07-12 22:18:35,309 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 22:18:35,309 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1478): Online Regions={2d4696434f4528a2d197b56bdf6d7df4=hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4.} 2023-07-12 22:18:35,310 DEBUG [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1504): Waiting on 2d4696434f4528a2d197b56bdf6d7df4 2023-07-12 22:18:35,311 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:35,311 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 22:18:35,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:35,311 DEBUG [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 22:18:35,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:35,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:35,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:35,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:35,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-12 22:18:35,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/quota/2d4696434f4528a2d197b56bdf6d7df4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:35,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 2023-07-12 22:18:35,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d4696434f4528a2d197b56bdf6d7df4: 2023-07-12 22:18:35,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689200313802.2d4696434f4528a2d197b56bdf6d7df4. 
2023-07-12 22:18:35,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/.tmp/info/c67bd4013dc04e9a9b675a639555f3e0 2023-07-12 22:18:35,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c67bd4013dc04e9a9b675a639555f3e0 2023-07-12 22:18:35,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/.tmp/info/c67bd4013dc04e9a9b675a639555f3e0 as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/info/c67bd4013dc04e9a9b675a639555f3e0 2023-07-12 22:18:35,339 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/info/a65c037bba324670a7e43c7bcce34a9f 2023-07-12 22:18:35,342 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c67bd4013dc04e9a9b675a639555f3e0 2023-07-12 22:18:35,348 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a65c037bba324670a7e43c7bcce34a9f 2023-07-12 22:18:35,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/info/c67bd4013dc04e9a9b675a639555f3e0, entries=3, sequenceid=8, filesize=5.0 K 2023-07-12 22:18:35,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 899340293f6ff231e36e5d29b1a8cba0 in 45ms, sequenceid=8, compaction requested=false 2023-07-12 22:18:35,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 22:18:35,353 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,356 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/namespace/899340293f6ff231e36e5d29b1a8cba0/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 22:18:35,362 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 
2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 899340293f6ff231e36e5d29b1a8cba0: 2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689200313219.899340293f6ff231e36e5d29b1a8cba0. 2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 75fd3ede643aebe5790fef4a02d7db79, disabling compactions & flushes 2023-07-12 22:18:35,362 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. after waiting 0 ms 2023-07-12 22:18:35,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:35,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 75fd3ede643aebe5790fef4a02d7db79 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-12 22:18:35,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/rep_barrier/041c005eb7654f0a96d3a9c5e0b001e8 2023-07-12 22:18:35,376 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 041c005eb7654f0a96d3a9c5e0b001e8 2023-07-12 22:18:35,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/.tmp/m/f4e505d27ec14336b7bce7d3f36f8c9b 2023-07-12 22:18:35,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/.tmp/m/f4e505d27ec14336b7bce7d3f36f8c9b as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/m/f4e505d27ec14336b7bce7d3f36f8c9b 2023-07-12 22:18:35,389 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/table/cf0a4074d6864b3a8fe721d3f0b39bb7 2023-07-12 22:18:35,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/m/f4e505d27ec14336b7bce7d3f36f8c9b, entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 22:18:35,395 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf0a4074d6864b3a8fe721d3f0b39bb7 2023-07-12 22:18:35,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 75fd3ede643aebe5790fef4a02d7db79 in 32ms, sequenceid=7, compaction requested=false 2023-07-12 22:18:35,396 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/info/a65c037bba324670a7e43c7bcce34a9f as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/info/a65c037bba324670a7e43c7bcce34a9f 2023-07-12 22:18:35,403 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a65c037bba324670a7e43c7bcce34a9f 2023-07-12 22:18:35,403 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/info/a65c037bba324670a7e43c7bcce34a9f, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 22:18:35,404 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/rep_barrier/041c005eb7654f0a96d3a9c5e0b001e8 as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/rep_barrier/041c005eb7654f0a96d3a9c5e0b001e8 2023-07-12 22:18:35,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/rsgroup/75fd3ede643aebe5790fef4a02d7db79/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 22:18:35,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:35,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 2023-07-12 22:18:35,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 75fd3ede643aebe5790fef4a02d7db79: 2023-07-12 22:18:35,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689200313405.75fd3ede643aebe5790fef4a02d7db79. 
2023-07-12 22:18:35,411 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 041c005eb7654f0a96d3a9c5e0b001e8 2023-07-12 22:18:35,411 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/rep_barrier/041c005eb7654f0a96d3a9c5e0b001e8, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 22:18:35,412 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/.tmp/table/cf0a4074d6864b3a8fe721d3f0b39bb7 as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/table/cf0a4074d6864b3a8fe721d3f0b39bb7 2023-07-12 22:18:35,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf0a4074d6864b3a8fe721d3f0b39bb7 2023-07-12 22:18:35,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/table/cf0a4074d6864b3a8fe721d3f0b39bb7, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 22:18:35,423 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 111ms, sequenceid=31, compaction requested=false 2023-07-12 22:18:35,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 22:18:35,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:35,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:35,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:35,456 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:35,508 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33227,1689200312415; all regions closed. 2023-07-12 22:18:35,509 DEBUG [RS:2;jenkins-hbase4:33227] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 22:18:35,510 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41315,1689200312070; all regions closed. 2023-07-12 22:18:35,510 DEBUG [RS:0;jenkins-hbase4:41315] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 22:18:35,511 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42757,1689200312255; all regions closed. 2023-07-12 22:18:35,511 DEBUG [RS:1;jenkins-hbase4:42757] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-12 22:18:35,523 DEBUG [RS:2;jenkins-hbase4:33227] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs 2023-07-12 22:18:35,523 INFO [RS:2;jenkins-hbase4:33227] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33227%2C1689200312415:(num 1689200313078) 2023-07-12 22:18:35,523 DEBUG [RS:2;jenkins-hbase4:33227] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,524 INFO [RS:2;jenkins-hbase4:33227] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,524 INFO [RS:2;jenkins-hbase4:33227] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:35,524 INFO [RS:2;jenkins-hbase4:33227] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:35,524 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:35,524 INFO [RS:2;jenkins-hbase4:33227] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:35,524 INFO [RS:2;jenkins-hbase4:33227] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:35,524 DEBUG [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs 2023-07-12 22:18:35,524 INFO [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42757%2C1689200312255.meta:.meta(num 1689200313164) 2023-07-12 22:18:35,525 INFO [RS:2;jenkins-hbase4:33227] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33227 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33227,1689200312415 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/rs 2023-07-12 22:18:35,531 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:35,532 DEBUG [RS:0;jenkins-hbase4:41315] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs 2023-07-12 22:18:35,532 INFO [RS:0;jenkins-hbase4:41315] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41315%2C1689200312070:(num 1689200313083) 2023-07-12 22:18:35,532 DEBUG [RS:0;jenkins-hbase4:41315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,532 INFO [RS:0;jenkins-hbase4:41315] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,533 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33227,1689200312415] 2023-07-12 22:18:35,533 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33227,1689200312415; numProcessing=1 2023-07-12 22:18:35,533 INFO [RS:0;jenkins-hbase4:41315] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:35,533 INFO [RS:0;jenkins-hbase4:41315] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:35,533 INFO [RS:0;jenkins-hbase4:41315] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:35,533 INFO [RS:0;jenkins-hbase4:41315] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:35,533 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 22:18:35,534 INFO [RS:0;jenkins-hbase4:41315] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41315 2023-07-12 22:18:35,536 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33227,1689200312415 already deleted, retry=false 2023-07-12 22:18:35,536 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33227,1689200312415 expired; onlineServers=2 2023-07-12 22:18:35,537 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:35,537 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:35,538 DEBUG [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/oldWALs 2023-07-12 22:18:35,538 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41315,1689200312070] 2023-07-12 22:18:35,538 INFO [RS:1;jenkins-hbase4:42757] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42757%2C1689200312255:(num 1689200313075) 2023-07-12 22:18:35,538 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41315,1689200312070; numProcessing=2 2023-07-12 22:18:35,538 DEBUG [RS:1;jenkins-hbase4:42757] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,538 INFO [RS:1;jenkins-hbase4:42757] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:35,538 INFO [RS:1;jenkins-hbase4:42757] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:35,538 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41315,1689200312070 2023-07-12 22:18:35,539 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 22:18:35,539 INFO [RS:1;jenkins-hbase4:42757] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42757 2023-07-12 22:18:35,541 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41315,1689200312070 already deleted, retry=false 2023-07-12 22:18:35,542 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41315,1689200312070 expired; onlineServers=1 2023-07-12 22:18:35,543 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42757,1689200312255 2023-07-12 22:18:35,544 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:35,545 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42757,1689200312255] 2023-07-12 22:18:35,545 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42757,1689200312255; numProcessing=3 2023-07-12 22:18:35,645 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,645 INFO [RS:1;jenkins-hbase4:42757] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42757,1689200312255; zookeeper connection closed. 
2023-07-12 22:18:35,645 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:42757-0x1015b9dc12d0002, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,646 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@33df509c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@33df509c 2023-07-12 22:18:35,646 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42757,1689200312255 already deleted, retry=false 2023-07-12 22:18:35,647 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42757,1689200312255 expired; onlineServers=0 2023-07-12 22:18:35,647 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39075,1689200311877' ***** 2023-07-12 22:18:35,647 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 22:18:35,649 DEBUG [M:0;jenkins-hbase4:39075] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bc2c26a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:35,649 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:35,652 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:35,652 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:35,652 INFO [M:0;jenkins-hbase4:39075] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4a620ab{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:35,652 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:35,655 INFO [M:0;jenkins-hbase4:39075] server.AbstractConnector(383): Stopped ServerConnector@77c386e4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:35,655 INFO [M:0;jenkins-hbase4:39075] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:35,655 INFO [M:0;jenkins-hbase4:39075] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5922bff8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:35,655 INFO [M:0;jenkins-hbase4:39075] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:35,656 INFO 
[M:0;jenkins-hbase4:39075] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39075,1689200311877 2023-07-12 22:18:35,656 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39075,1689200311877; all regions closed. 2023-07-12 22:18:35,656 DEBUG [M:0;jenkins-hbase4:39075] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:35,656 INFO [M:0;jenkins-hbase4:39075] master.HMaster(1491): Stopping master jetty server 2023-07-12 22:18:35,657 INFO [M:0;jenkins-hbase4:39075] server.AbstractConnector(383): Stopped ServerConnector@4c2c88b8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:35,657 DEBUG [M:0;jenkins-hbase4:39075] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 22:18:35,657 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 22:18:35,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200312797] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200312797,5,FailOnTimeoutGroup] 2023-07-12 22:18:35,657 DEBUG [M:0;jenkins-hbase4:39075] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 22:18:35,658 INFO [M:0;jenkins-hbase4:39075] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 22:18:35,658 INFO [M:0;jenkins-hbase4:39075] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 22:18:35,658 INFO [M:0;jenkins-hbase4:39075] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:35,658 DEBUG [M:0;jenkins-hbase4:39075] master.HMaster(1512): Stopping service threads 2023-07-12 22:18:35,658 INFO [M:0;jenkins-hbase4:39075] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 22:18:35,658 ERROR [M:0;jenkins-hbase4:39075] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 22:18:35,658 INFO [M:0;jenkins-hbase4:39075] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 22:18:35,658 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 22:18:35,659 DEBUG [M:0;jenkins-hbase4:39075] zookeeper.ZKUtil(398): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 22:18:35,659 WARN [M:0;jenkins-hbase4:39075] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 22:18:35,659 INFO [M:0;jenkins-hbase4:39075] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 22:18:35,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200312796] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200312796,5,FailOnTimeoutGroup] 2023-07-12 22:18:35,660 INFO [M:0;jenkins-hbase4:39075] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 22:18:35,661 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:35,661 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:35,661 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:35,661 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:35,661 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 22:18:35,661 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.11 KB 2023-07-12 22:18:35,683 INFO [M:0;jenkins-hbase4:39075] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c609f9b731e04700974cd48496bf72cc 2023-07-12 22:18:35,689 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c609f9b731e04700974cd48496bf72cc as hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c609f9b731e04700974cd48496bf72cc 2023-07-12 22:18:35,695 INFO [M:0;jenkins-hbase4:39075] regionserver.HStore(1080): Added hdfs://localhost:33559/user/jenkins/test-data/d52b01e8-6db9-17b0-5fd4-bd2fc78eeff6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c609f9b731e04700974cd48496bf72cc, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 22:18:35,696 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95185, heapSize ~109.09 KB/111712, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 35ms, sequenceid=194, compaction requested=false 2023-07-12 22:18:35,704 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:35,704 DEBUG [M:0;jenkins-hbase4:39075] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:35,721 INFO [M:0;jenkins-hbase4:39075] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 22:18:35,721 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:35,722 INFO [M:0;jenkins-hbase4:39075] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39075 2023-07-12 22:18:35,725 DEBUG [M:0;jenkins-hbase4:39075] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39075,1689200311877 already deleted, retry=false 2023-07-12 22:18:35,881 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,881 INFO [M:0;jenkins-hbase4:39075] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39075,1689200311877; zookeeper connection closed. 2023-07-12 22:18:35,881 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): master:39075-0x1015b9dc12d0000, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,981 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,981 INFO [RS:0;jenkins-hbase4:41315] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41315,1689200312070; zookeeper connection closed. 
2023-07-12 22:18:35,981 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:41315-0x1015b9dc12d0001, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:35,981 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@116e616c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@116e616c 2023-07-12 22:18:36,081 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:36,081 INFO [RS:2;jenkins-hbase4:33227] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33227,1689200312415; zookeeper connection closed. 2023-07-12 22:18:36,081 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): regionserver:33227-0x1015b9dc12d0003, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:36,082 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6e8d0329] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6e8d0329 2023-07-12 22:18:36,082 INFO [Listener at localhost/43911] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 22:18:36,082 WARN [Listener at localhost/43911] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:36,087 INFO [Listener at localhost/43911] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:36,190 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:36,191 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1648872466-172.31.14.131-1689200310922 (Datanode Uuid 140807b8-72d4-4169-9915-26353847a3a6) service to localhost/127.0.0.1:33559 2023-07-12 22:18:36,192 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data5/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,192 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data6/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,195 WARN [Listener at localhost/43911] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:36,198 INFO [Listener at localhost/43911] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:36,302 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-12 22:18:36,302 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1648872466-172.31.14.131-1689200310922 (Datanode Uuid d08f1db3-63cc-4032-9838-8e7a1398b55c) service to localhost/127.0.0.1:33559 2023-07-12 22:18:36,303 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data3/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,304 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data4/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,305 WARN [Listener at localhost/43911] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:36,310 INFO [Listener at localhost/43911] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:36,413 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:36,413 WARN [BP-1648872466-172.31.14.131-1689200310922 heartbeating to localhost/127.0.0.1:33559] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1648872466-172.31.14.131-1689200310922 (Datanode Uuid 86d76ca0-e194-4967-9c13-2bc54c8e3faa) service to localhost/127.0.0.1:33559 2023-07-12 22:18:36,413 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data1/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,414 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/cluster_bd747c73-1dcf-47ea-28ca-7c9c5b338765/dfs/data/data2/current/BP-1648872466-172.31.14.131-1689200310922] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:36,423 INFO [Listener at localhost/43911] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:36,540 INFO [Listener at localhost/43911] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.log.dir so I do NOT create it in target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/200bb03b-2f88-d52d-3db7-73f9660cc4ed/hadoop.tmp.dir so I do NOT create it in target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5, deleteOnExit=true 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/test.cache.data in system properties and HBase conf 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 22:18:36,570 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 22:18:36,571 DEBUG [Listener at localhost/43911] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 22:18:36,571 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/nfs.dump.dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 22:18:36,572 INFO [Listener at localhost/43911] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 22:18:36,576 WARN [Listener at localhost/43911] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:18:36,577 WARN [Listener at localhost/43911] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:18:36,621 WARN [Listener at localhost/43911] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:36,623 INFO [Listener at localhost/43911] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:36,627 INFO [Listener at localhost/43911] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/Jetty_localhost_44733_hdfs____trh4ox/webapp 2023-07-12 22:18:36,638 DEBUG [Listener at localhost/43911-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015b9dc12d000a, quorum=127.0.0.1:54162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 22:18:36,638 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015b9dc12d000a, quorum=127.0.0.1:54162, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 22:18:36,720 INFO [Listener at localhost/43911] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44733 2023-07-12 22:18:36,724 WARN [Listener at localhost/43911] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 22:18:36,724 WARN [Listener at localhost/43911] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 22:18:36,769 WARN [Listener at localhost/34687] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:36,790 WARN [Listener at localhost/34687] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:36,795 WARN [Listener 
at localhost/34687] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:36,796 INFO [Listener at localhost/34687] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:36,801 INFO [Listener at localhost/34687] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/Jetty_localhost_46647_datanode____.qd9tsy/webapp 2023-07-12 22:18:36,898 INFO [Listener at localhost/34687] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46647 2023-07-12 22:18:36,908 WARN [Listener at localhost/40511] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:36,926 WARN [Listener at localhost/40511] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:36,928 WARN [Listener at localhost/40511] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:36,929 INFO [Listener at localhost/40511] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:36,938 INFO [Listener at localhost/40511] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/Jetty_localhost_40387_datanode____.cx2d2b/webapp 2023-07-12 22:18:37,023 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78fc53544cd07440: Processing first storage report for DS-fa475831-c6d0-4e59-b8f1-f41fed357e55 from datanode 9c10f582-c84a-420a-a412-7bcf0c59e88a 2023-07-12 22:18:37,023 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x78fc53544cd07440: from storage DS-fa475831-c6d0-4e59-b8f1-f41fed357e55 node DatanodeRegistration(127.0.0.1:41365, datanodeUuid=9c10f582-c84a-420a-a412-7bcf0c59e88a, infoPort=34197, infoSecurePort=0, ipcPort=40511, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,024 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78fc53544cd07440: Processing first storage report for DS-9f2a30f0-70b8-4126-b6c6-ac4646873ead from datanode 9c10f582-c84a-420a-a412-7bcf0c59e88a 2023-07-12 22:18:37,024 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x78fc53544cd07440: from storage DS-9f2a30f0-70b8-4126-b6c6-ac4646873ead node DatanodeRegistration(127.0.0.1:41365, datanodeUuid=9c10f582-c84a-420a-a412-7bcf0c59e88a, infoPort=34197, infoSecurePort=0, ipcPort=40511, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,040 INFO [Listener at localhost/40511] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40387 2023-07-12 22:18:37,050 WARN [Listener at localhost/34299] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 22:18:37,069 WARN [Listener at localhost/34299] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 22:18:37,071 WARN [Listener at localhost/34299] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 22:18:37,072 INFO [Listener at localhost/34299] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 22:18:37,075 INFO [Listener at localhost/34299] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/Jetty_localhost_45267_datanode____7oos3d/webapp 2023-07-12 22:18:37,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x716d5b7cce43f167: Processing first storage report for DS-9f007403-0901-431e-b6d9-15ba576dd9b5 from datanode 4df045c7-9cc3-4b0f-9717-17f374307a32 2023-07-12 22:18:37,164 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x716d5b7cce43f167: from storage DS-9f007403-0901-431e-b6d9-15ba576dd9b5 node DatanodeRegistration(127.0.0.1:44885, datanodeUuid=4df045c7-9cc3-4b0f-9717-17f374307a32, infoPort=39933, infoSecurePort=0, ipcPort=34299, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x716d5b7cce43f167: Processing first storage report for DS-4e05edd7-3424-44ca-a4b2-8203adcf9abb from datanode 4df045c7-9cc3-4b0f-9717-17f374307a32 2023-07-12 22:18:37,164 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x716d5b7cce43f167: from storage DS-4e05edd7-3424-44ca-a4b2-8203adcf9abb node DatanodeRegistration(127.0.0.1:44885, datanodeUuid=4df045c7-9cc3-4b0f-9717-17f374307a32, infoPort=39933, infoSecurePort=0, ipcPort=34299, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,181 INFO [Listener at localhost/34299] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45267 2023-07-12 22:18:37,190 WARN [Listener at localhost/36883] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 22:18:37,306 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7597c4c5b6006c8d: Processing first storage report for DS-b579477e-893c-4f09-8805-bfe8e9b2b85c from datanode dc27d2d0-dff6-433e-93b9-4396c3a3ddce 2023-07-12 22:18:37,306 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7597c4c5b6006c8d: from storage DS-b579477e-893c-4f09-8805-bfe8e9b2b85c node DatanodeRegistration(127.0.0.1:38483, datanodeUuid=dc27d2d0-dff6-433e-93b9-4396c3a3ddce, infoPort=39391, infoSecurePort=0, ipcPort=36883, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,306 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7597c4c5b6006c8d: Processing first storage 
report for DS-d8ae505a-76d3-4684-b624-b429ec17bed9 from datanode dc27d2d0-dff6-433e-93b9-4396c3a3ddce 2023-07-12 22:18:37,306 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7597c4c5b6006c8d: from storage DS-d8ae505a-76d3-4684-b624-b429ec17bed9 node DatanodeRegistration(127.0.0.1:38483, datanodeUuid=dc27d2d0-dff6-433e-93b9-4396c3a3ddce, infoPort=39391, infoSecurePort=0, ipcPort=36883, storageInfo=lv=-57;cid=testClusterID;nsid=1693785729;c=1689200316579), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 22:18:37,323 DEBUG [Listener at localhost/36883] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6 2023-07-12 22:18:37,325 INFO [Listener at localhost/36883] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/zookeeper_0, clientPort=61599, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 22:18:37,326 INFO [Listener at localhost/36883] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61599 2023-07-12 22:18:37,326 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,327 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,342 INFO [Listener at localhost/36883] util.FSUtils(471): Created version file at hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 with version=8 2023-07-12 22:18:37,342 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40075/user/jenkins/test-data/81e20b3f-8793-7044-f7c9-5f66f15a4105/hbase-staging 2023-07-12 22:18:37,343 DEBUG [Listener at localhost/36883] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 22:18:37,343 DEBUG [Listener at localhost/36883] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 22:18:37,343 DEBUG [Listener at localhost/36883] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 22:18:37,343 DEBUG [Listener at localhost/36883] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:37,344 INFO [Listener at localhost/36883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:37,345 INFO [Listener at localhost/36883] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35207 2023-07-12 22:18:37,345 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,346 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,347 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35207 connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:37,356 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:352070x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:37,360 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35207-0x1015b9dd68f0000 connected 2023-07-12 22:18:37,382 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:37,383 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:37,383 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:37,386 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35207 2023-07-12 22:18:37,387 DEBUG [Listener at localhost/36883] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35207 2023-07-12 22:18:37,390 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35207 2023-07-12 22:18:37,394 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35207 2023-07-12 22:18:37,395 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35207 2023-07-12 22:18:37,397 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:37,397 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:37,397 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:37,398 INFO [Listener at localhost/36883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 22:18:37,398 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:37,398 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:37,399 INFO [Listener at localhost/36883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 22:18:37,399 INFO [Listener at localhost/36883] http.HttpServer(1146): Jetty bound to port 37621 2023-07-12 22:18:37,399 INFO [Listener at localhost/36883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:37,401 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,401 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32bc0174{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:37,401 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,402 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@288b05aa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:37,527 INFO [Listener at localhost/36883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:37,528 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:37,528 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:37,528 INFO [Listener at localhost/36883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:37,530 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,532 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@75701f75{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/jetty-0_0_0_0-37621-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8093629261326879849/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:37,533 INFO [Listener at localhost/36883] server.AbstractConnector(333): Started ServerConnector@75842d05{HTTP/1.1, (http/1.1)}{0.0.0.0:37621} 2023-07-12 22:18:37,533 INFO [Listener at localhost/36883] server.Server(415): Started @42606ms 2023-07-12 22:18:37,533 INFO [Listener at localhost/36883] master.HMaster(444): hbase.rootdir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3, hbase.cluster.distributed=false 2023-07-12 22:18:37,552 INFO [Listener at localhost/36883] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:37,553 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,553 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,553 
INFO [Listener at localhost/36883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:37,553 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,553 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:37,553 INFO [Listener at localhost/36883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:37,554 INFO [Listener at localhost/36883] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46711 2023-07-12 22:18:37,555 INFO [Listener at localhost/36883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:37,556 DEBUG [Listener at localhost/36883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:37,556 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,558 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,559 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46711 connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:37,563 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:467110x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:37,565 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46711-0x1015b9dd68f0001 connected 2023-07-12 22:18:37,565 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:37,566 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:37,566 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:37,567 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46711 2023-07-12 22:18:37,567 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46711 2023-07-12 22:18:37,570 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46711 2023-07-12 22:18:37,571 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46711 2023-07-12 22:18:37,574 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46711 2023-07-12 22:18:37,575 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:37,576 INFO [Listener at localhost/36883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:37,577 INFO [Listener at localhost/36883] http.HttpServer(1146): Jetty bound to port 45293 2023-07-12 22:18:37,577 INFO [Listener at localhost/36883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:37,579 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,579 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68a51037{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:37,580 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,580 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7437223a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:37,699 INFO [Listener at localhost/36883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:37,700 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:37,700 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:37,700 INFO [Listener at localhost/36883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:37,701 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,702 INFO 
[Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@45896d80{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/jetty-0_0_0_0-45293-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2642923616873988683/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:37,703 INFO [Listener at localhost/36883] server.AbstractConnector(333): Started ServerConnector@1cfaac5f{HTTP/1.1, (http/1.1)}{0.0.0.0:45293} 2023-07-12 22:18:37,703 INFO [Listener at localhost/36883] server.Server(415): Started @42776ms 2023-07-12 22:18:37,714 INFO [Listener at localhost/36883] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:37,714 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,715 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,715 INFO [Listener at localhost/36883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:37,715 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,715 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:37,715 INFO [Listener at localhost/36883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:37,716 INFO [Listener at localhost/36883] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38421 2023-07-12 22:18:37,716 INFO [Listener at localhost/36883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:37,717 DEBUG [Listener at localhost/36883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:37,718 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,718 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,719 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38421 connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:37,723 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:384210x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
22:18:37,724 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38421-0x1015b9dd68f0002 connected 2023-07-12 22:18:37,724 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:37,725 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:37,725 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:37,726 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38421 2023-07-12 22:18:37,727 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38421 2023-07-12 22:18:37,730 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38421 2023-07-12 22:18:37,730 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38421 2023-07-12 22:18:37,730 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38421 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:37,732 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:37,733 INFO [Listener at localhost/36883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
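The zookeeper.ZKUtil lines above show each region server setting watches on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. A sketch of how the same kind of pre-emptive watch is set through the public ZKWatcher/ZKUtil helpers, assuming the hbase-zookeeper classes from branch-2.4; the quorum string and identifier are placeholders and error handling is omitted.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Abortable;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZKWatcher;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");  // placeholder ensemble
    Abortable noopAbortable = new Abortable() {
      @Override public void abort(String why, Throwable e) { /* no-op for the sketch */ }
      @Override public boolean isAborted() { return false; }
    };
    try (ZKWatcher zkw = new ZKWatcher(conf, "watch-sketch", noopAbortable)) {
      // Registers a watch even if /hbase/master has not been created yet,
      // matching the "Set watcher on znode that does not yet exist" log lines.
      boolean exists = ZKUtil.watchAndCheckExists(zkw, zkw.getZNodePaths().masterAddressZNode);
      System.out.println("/hbase/master present: " + exists);
    }
  }
}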
2023-07-12 22:18:37,733 INFO [Listener at localhost/36883] http.HttpServer(1146): Jetty bound to port 41381 2023-07-12 22:18:37,733 INFO [Listener at localhost/36883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:37,737 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,737 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@57997e99{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:37,738 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,738 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5213eb6a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:37,864 INFO [Listener at localhost/36883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:37,865 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:37,865 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:37,866 INFO [Listener at localhost/36883] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 22:18:37,867 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,868 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@39303d70{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/jetty-0_0_0_0-41381-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7894001159945792004/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:37,870 INFO [Listener at localhost/36883] server.AbstractConnector(333): Started ServerConnector@2a64ec80{HTTP/1.1, (http/1.1)}{0.0.0.0:41381} 2023-07-12 22:18:37,870 INFO [Listener at localhost/36883] server.Server(415): Started @42943ms 2023-07-12 22:18:37,883 INFO [Listener at localhost/36883] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:37,883 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,883 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,883 INFO [Listener at localhost/36883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:37,883 INFO 
[Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:37,884 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:37,884 INFO [Listener at localhost/36883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:37,884 INFO [Listener at localhost/36883] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43539 2023-07-12 22:18:37,885 INFO [Listener at localhost/36883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:37,886 DEBUG [Listener at localhost/36883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:37,887 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,888 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:37,889 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43539 connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:37,893 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:435390x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:37,894 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:435390x0, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:37,895 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43539-0x1015b9dd68f0003 connected 2023-07-12 22:18:37,895 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:37,895 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:37,896 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43539 2023-07-12 22:18:37,897 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43539 2023-07-12 22:18:37,897 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43539 2023-07-12 22:18:37,898 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43539 2023-07-12 22:18:37,899 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=43539 2023-07-12 22:18:37,900 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:37,900 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:37,900 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:37,901 INFO [Listener at localhost/36883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:37,901 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:37,901 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:37,901 INFO [Listener at localhost/36883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 22:18:37,902 INFO [Listener at localhost/36883] http.HttpServer(1146): Jetty bound to port 41009 2023-07-12 22:18:37,902 INFO [Listener at localhost/36883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:37,908 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,908 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@399f6210{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:37,908 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:37,908 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@518030b3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:38,022 INFO [Listener at localhost/36883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:38,023 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:38,023 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:38,023 INFO [Listener at localhost/36883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:38,024 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:38,024 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@239401c3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/jetty-0_0_0_0-41009-hbase-server-2_4_18-SNAPSHOT_jar-_-any-195973278567654872/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:38,026 INFO [Listener at localhost/36883] server.AbstractConnector(333): Started ServerConnector@40742411{HTTP/1.1, (http/1.1)}{0.0.0.0:41009} 2023-07-12 22:18:38,026 INFO [Listener at localhost/36883] server.Server(415): Started @43099ms 2023-07-12 22:18:38,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:38,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1fac623c{HTTP/1.1, (http/1.1)}{0.0.0.0:35925} 2023-07-12 22:18:38,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43104ms 2023-07-12 22:18:38,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,033 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:38,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,035 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:38,035 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:38,035 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:38,035 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:38,036 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:38,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35207,1689200317344 from backup master directory 2023-07-12 22:18:38,039 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:38,040 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,040 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:38,040 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,040 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 22:18:38,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/hbase.id with ID: f6676a37-8259-469d-b369-c5d9d72f0308 2023-07-12 22:18:38,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:38,071 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x67ea00fe to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:38,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e13c30a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:38,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:38,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 22:18:38,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:38,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store-tmp 2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:38,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:38,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
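The master-local 'master:store' region logged above is created internally by HMaster, but its single 'proc' family (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536') maps directly onto the ordinary 2.x descriptor builders. A sketch under a stand-in table name, purely to illustrate the builder API rather than to recreate the master store itself.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
  // Builds a descriptor with the same 'proc' family layout the log prints for master:store.
  public static TableDescriptor procLikeDescriptor() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo", "store")) // stand-in name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .build())
        .build();
  }
}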
2023-07-12 22:18:38,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:38,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/WALs/jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35207%2C1689200317344, suffix=, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/WALs/jenkins-hbase4.apache.org,35207,1689200317344, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/oldWALs, maxLogs=10 2023-07-12 22:18:38,124 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:38,131 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:38,131 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:38,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/WALs/jenkins-hbase4.apache.org,35207,1689200317344/jenkins-hbase4.apache.org%2C35207%2C1689200317344.1689200318109 2023-07-12 22:18:38,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK], DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK]] 2023-07-12 22:18:38,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:38,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,138 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,139 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 22:18:38,139 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 22:18:38,140 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 22:18:38,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:38,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11945926560, jitterRate=0.11255110800266266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:38,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:38,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 22:18:38,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 22:18:38,149 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 22:18:38,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 22:18:38,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 22:18:38,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 22:18:38,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 22:18:38,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 22:18:38,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 22:18:38,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 22:18:38,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 22:18:38,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 22:18:38,156 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 22:18:38,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 22:18:38,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 22:18:38,159 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:38,159 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:38,159 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 22:18:38,160 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,160 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:38,162 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35207,1689200317344, sessionid=0x1015b9dd68f0000, setting cluster-up flag (Was=false) 2023-07-12 22:18:38,165 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 22:18:38,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,173 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 22:18:38,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:38,179 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.hbase-snapshot/.tmp 2023-07-12 22:18:38,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 22:18:38,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 22:18:38,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 22:18:38,181 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:38,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
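The coprocessor.CoprocessorHost and RSGroupAdminService lines above reflect the rsgroup endpoint being loaded on the master. A sketch of the standard settings a cluster (or a test utility) uses to enable it together with the group-aware balancer; the class names are the documented ones for pre-built-in rsgroup support, not configuration taken from this test run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupEnableSketch {
  public static Configuration rsGroupConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}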
2023-07-12 22:18:38,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:38,248 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(951): ClusterId : f6676a37-8259-469d-b369-c5d9d72f0308 2023-07-12 22:18:38,249 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(951): ClusterId : f6676a37-8259-469d-b369-c5d9d72f0308 2023-07-12 22:18:38,252 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:38,252 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:38,256 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(951): ClusterId : f6676a37-8259-469d-b369-c5d9d72f0308 2023-07-12 22:18:38,258 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:38,258 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:38,258 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:38,258 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:38,260 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:38,260 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:38,263 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:38,264 DEBUG [RS:0;jenkins-hbase4:46711] zookeeper.ReadOnlyZKClient(139): Connect 0x2f42a090 to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:38,266 DEBUG [RS:1;jenkins-hbase4:38421] zookeeper.ReadOnlyZKClient(139): Connect 0x1ed9f5b1 to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:38,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:38,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
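The balancer.StochasticLoadBalancer line above dumps the tuning it loaded (maxSteps, runMaxSteps, stepsPerRegion, maxRunningTime and the cost-function set). A sketch of the corresponding configuration keys, reusing the same numbers purely for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StochasticBalancerTuningSketch {
  public static Configuration balancerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30000); // milliseconds
    return conf;
  }
}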
2023-07-12 22:18:38,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 22:18:38,274 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:38,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 22:18:38,274 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:38,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:38,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,277 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:38,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689200348279 2023-07-12 22:18:38,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 22:18:38,280 DEBUG [RS:0;jenkins-hbase4:46711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21854b08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:38,280 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ReadOnlyZKClient(139): Connect 0x09e3e8ed to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:38,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 22:18:38,280 DEBUG [RS:1;jenkins-hbase4:38421] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57ed604f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:38,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 22:18:38,280 DEBUG [RS:0;jenkins-hbase4:46711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25bacbe0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:38,281 DEBUG [RS:1;jenkins-hbase4:38421] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f0dcd97, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:38,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 22:18:38,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 22:18:38,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 22:18:38,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
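The cleaner.CleanerChore lines above list the log- and HFile-cleaner delegates the master initialized. A sketch of the configuration keys that control which cleaner plugins run and how long old WALs are retained; the class lists shown are an illustrative subset, not the full default chain logged above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfigSketch {
  public static Configuration cleanerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    conf.setLong("hbase.master.logcleaner.ttl", 600000L); // keep old WALs for 10 minutes
    return conf;
  }
}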
2023-07-12 22:18:38,287 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:38,287 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 22:18:38,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 22:18:38,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 22:18:38,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 22:18:38,288 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:38,291 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38421 2023-07-12 22:18:38,291 INFO [RS:1;jenkins-hbase4:38421] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:38,291 INFO [RS:1;jenkins-hbase4:38421] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:38,291 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:38,292 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35207,1689200317344 with isa=jenkins-hbase4.apache.org/172.31.14.131:38421, startcode=1689200317714 2023-07-12 22:18:38,292 DEBUG [RS:1;jenkins-hbase4:38421] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:38,292 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46711 2023-07-12 22:18:38,292 INFO [RS:0;jenkins-hbase4:46711] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:38,292 INFO [RS:0;jenkins-hbase4:46711] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:38,292 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1022): About to register with Master. 
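InitMetaProcedure above writes the hbase:meta descriptor ('info', 'rep_barrier' and 'table' families) before the region is assigned. Once the cluster is serving, a client can read that descriptor back through the Admin API; a sketch assuming a reachable cluster in the local configuration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class MetaDescriptorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Reads the same family layout the master just wrote to the filesystem.
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      System.out.println(meta);
    }
  }
}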
2023-07-12 22:18:38,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 22:18:38,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 22:18:38,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200318293,5,FailOnTimeoutGroup] 2023-07-12 22:18:38,293 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35207,1689200317344 with isa=jenkins-hbase4.apache.org/172.31.14.131:46711, startcode=1689200317552 2023-07-12 22:18:38,293 DEBUG [RS:0;jenkins-hbase4:46711] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:38,294 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200318293,5,FailOnTimeoutGroup] 2023-07-12 22:18:38,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 22:18:38,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,297 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38141, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:38,302 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,302 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
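The RSGroupInfoManagerImpl$ServerEventsListenerThread entries above come from the rsgroup coprocessor this test suite runs with. For reference, rsgroups are typically enabled through the master coprocessor and balancer settings shown below; this is a configuration sketch based on the upstream rsgroup documentation, not values read from this test's site file:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint on the master and use the group-aware balancer.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.master.loadbalancer.class"));
  }
}
```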
2023-07-12 22:18:38,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 22:18:38,305 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 2023-07-12 22:18:38,305 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34687 2023-07-12 22:18:38,305 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37621 2023-07-12 22:18:38,305 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60239, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:38,305 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 22:18:38,306 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 22:18:38,306 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 2023-07-12 22:18:38,306 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34687 2023-07-12 22:18:38,306 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37621 2023-07-12 22:18:38,308 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:38,314 DEBUG [RS:1;jenkins-hbase4:38421] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,314 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46711,1689200317552] 2023-07-12 22:18:38,314 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38421,1689200317714] 2023-07-12 22:18:38,314 WARN [RS:1;jenkins-hbase4:38421] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:38,314 DEBUG [RS:2;jenkins-hbase4:43539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72417a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:38,314 INFO [RS:1;jenkins-hbase4:38421] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:38,314 DEBUG [RS:0;jenkins-hbase4:46711] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,314 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,314 DEBUG [RS:2;jenkins-hbase4:43539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31b839d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:38,314 WARN [RS:0;jenkins-hbase4:46711] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 22:18:38,315 INFO [RS:0;jenkins-hbase4:46711] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:38,315 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,326 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:38,326 DEBUG [RS:1;jenkins-hbase4:38421] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,326 DEBUG [RS:0;jenkins-hbase4:46711] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,326 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:38,327 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', 
VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 2023-07-12 22:18:38,327 DEBUG [RS:0;jenkins-hbase4:46711] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,327 DEBUG [RS:1;jenkins-hbase4:38421] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,328 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:38,328 INFO [RS:0;jenkins-hbase4:46711] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:38,330 INFO [RS:0;jenkins-hbase4:46711] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:38,330 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:38,330 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43539 2023-07-12 22:18:38,330 INFO [RS:1;jenkins-hbase4:38421] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:38,330 INFO [RS:2;jenkins-hbase4:43539] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:38,330 INFO [RS:2;jenkins-hbase4:43539] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:38,330 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:38,331 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35207,1689200317344 with isa=jenkins-hbase4.apache.org/172.31.14.131:43539, startcode=1689200317882 2023-07-12 22:18:38,331 DEBUG [RS:2;jenkins-hbase4:43539] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:38,333 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33265, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:38,333 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,333 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:38,333 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 22:18:38,333 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 2023-07-12 22:18:38,333 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34687 2023-07-12 22:18:38,333 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37621 2023-07-12 22:18:38,335 INFO [RS:0;jenkins-hbase4:46711] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:38,335 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,335 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:38,335 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:38,335 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:38,335 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,335 WARN [RS:2;jenkins-hbase4:43539] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:38,335 INFO [RS:2;jenkins-hbase4:43539] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:38,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,336 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,336 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,336 INFO [RS:1;jenkins-hbase4:38421] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:38,337 INFO [RS:1;jenkins-hbase4:38421] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:38,337 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,337 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:38,337 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43539,1689200317882] 2023-07-12 22:18:38,343 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:38,349 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,349 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,349 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
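Each region server above instantiates an AsyncFSWALProvider before creating its WAL. The provider choice and the roll/retention parameters that appear later in the log (blocksize, rollsize, maxLogs) are driven by configuration roughly like the sketch below; the key names are standard HBase settings, assumed rather than taken from this test's actual configuration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Async WAL provider, matching "Instantiating WALProvider of type ... AsyncFSWALProvider".
    conf.set("hbase.wal.provider", "asyncfs");
    // Roll at half the WAL block size and keep at most 32 logs, as in "rollsize=128 MB ... maxLogs=32".
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    conf.setInt("hbase.regionserver.maxlogs", 32);
    System.out.println(conf.get("hbase.wal.provider"));
  }
}
```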
2023-07-12 22:18:38,349 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,349 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,349 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,349 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:38,350 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,350 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:38,351 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:0;jenkins-hbase4:46711] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,351 DEBUG [RS:1;jenkins-hbase4:38421] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,359 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,359 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,359 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,360 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,360 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,360 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,361 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,361 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,362 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,362 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:38,363 INFO [RS:2;jenkins-hbase4:43539] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:38,374 INFO [RS:2;jenkins-hbase4:43539] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:38,374 INFO [RS:1;jenkins-hbase4:38421] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:38,375 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38421,1689200317714-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,375 INFO [RS:2;jenkins-hbase4:43539] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:38,375 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
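The PressureAwareCompactionThroughputController lines above report a 50-100 MB/s compaction throughput window per region server. If a test needed different bounds, it would set them via configuration along the lines of the sketch below; the exact key names are recalled from the throughput controller and should be verified against the HBase version in use:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Upper and lower throughput bounds in bytes/second (100 MB/s and 50 MB/s, as logged above).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
  }
}
```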
2023-07-12 22:18:38,375 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:38,377 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,378 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,378 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,378 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,378 INFO [RS:0;jenkins-hbase4:46711] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:38,379 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,379 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46711,1689200317552-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,379 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,383 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:38,383 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,383 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,383 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,383 DEBUG [RS:2;jenkins-hbase4:43539] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:38,388 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,388 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,388 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:38,390 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:38,399 INFO [RS:0;jenkins-hbase4:46711] regionserver.Replication(203): jenkins-hbase4.apache.org,46711,1689200317552 started 2023-07-12 22:18:38,399 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46711,1689200317552, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46711, sessionid=0x1015b9dd68f0001 2023-07-12 22:18:38,399 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:38,399 DEBUG [RS:0;jenkins-hbase4:46711] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,399 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46711,1689200317552' 2023-07-12 22:18:38,399 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:38,399 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/info 2023-07-12 22:18:38,399 INFO [RS:1;jenkins-hbase4:38421] regionserver.Replication(203): jenkins-hbase4.apache.org,38421,1689200317714 started 2023-07-12 22:18:38,399 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38421,1689200317714, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38421, sessionid=0x1015b9dd68f0002 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:38,400 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:38,400 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:38,400 DEBUG [RS:1;jenkins-hbase4:38421] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,400 DEBUG [RS:1;jenkins-hbase4:38421] 
procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38421,1689200317714' 2023-07-12 22:18:38,400 DEBUG [RS:1;jenkins-hbase4:38421] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46711,1689200317552' 2023-07-12 22:18:38,400 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:38,400 DEBUG [RS:1;jenkins-hbase4:38421] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:38,401 DEBUG [RS:0;jenkins-hbase4:46711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38421,1689200317714' 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:38,401 DEBUG [RS:0;jenkins-hbase4:46711] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:38,401 INFO [RS:0;jenkins-hbase4:46711] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:38,401 DEBUG [RS:1;jenkins-hbase4:38421] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:38,401 INFO [RS:0;jenkins-hbase4:46711] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 22:18:38,402 DEBUG [RS:1;jenkins-hbase4:38421] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:38,402 INFO [RS:1;jenkins-hbase4:38421] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:38,402 INFO [RS:1;jenkins-hbase4:38421] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 22:18:38,402 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,402 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:38,403 INFO [RS:2;jenkins-hbase4:43539] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:38,403 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43539,1689200317882-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,404 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:38,404 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:38,405 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,405 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:38,406 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/table 2023-07-12 22:18:38,407 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:38,407 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,411 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740 2023-07-12 22:18:38,412 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740 2023-07-12 22:18:38,414 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 22:18:38,415 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:38,418 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11772242400, jitterRate=0.09637551009654999}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:38,418 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:38,418 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:38,419 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:38,419 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:38,420 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 22:18:38,420 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 22:18:38,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 22:18:38,421 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 22:18:38,423 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, 
retain=false 2023-07-12 22:18:38,423 INFO [RS:2;jenkins-hbase4:43539] regionserver.Replication(203): jenkins-hbase4.apache.org,43539,1689200317882 started 2023-07-12 22:18:38,423 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43539,1689200317882, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43539, sessionid=0x1015b9dd68f0003 2023-07-12 22:18:38,423 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:38,423 DEBUG [RS:2;jenkins-hbase4:43539] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,423 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43539,1689200317882' 2023-07-12 22:18:38,423 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43539,1689200317882' 2023-07-12 22:18:38,424 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:38,425 DEBUG [RS:2;jenkins-hbase4:43539] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:38,425 DEBUG [RS:2;jenkins-hbase4:43539] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:38,425 INFO [RS:2;jenkins-hbase4:43539] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:38,425 INFO [RS:2;jenkins-hbase4:43539] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
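When region 1588230740 is opened above, it reports a SteppingSplitPolicy wrapping ConstantSizeRegionSplitPolicy with a jittered desiredMaxFileSize; that is the default policy, not something set on the descriptor. For comparison, a table can request an explicit split policy and maximum file size through the descriptor builder, as in this sketch (the policy class name is the one printed in the log; the table name, family, and 10 GB size are illustrative assumptions):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class SplitPolicySketch {
  public static void main(String[] args) {
    TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("sketch_table"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("f")))
        // Same policy class the log reports when opening hbase:meta.
        .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        // Base max file size before jitter is applied; 10 GB here is illustrative only.
        .setMaxFileSize(10L * 1024 * 1024 * 1024)
        .build();
    System.out.println(td);
  }
}
```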
2023-07-12 22:18:38,503 INFO [RS:1;jenkins-hbase4:38421] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38421%2C1689200317714, suffix=, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,38421,1689200317714, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs, maxLogs=32 2023-07-12 22:18:38,504 INFO [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46711%2C1689200317552, suffix=, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,46711,1689200317552, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs, maxLogs=32 2023-07-12 22:18:38,523 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:38,527 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:38,527 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:38,527 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:38,528 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:38,529 INFO [RS:2;jenkins-hbase4:43539] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43539%2C1689200317882, suffix=, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,43539,1689200317882, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs, maxLogs=32 2023-07-12 22:18:38,547 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:38,549 INFO [RS:1;jenkins-hbase4:38421] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,38421,1689200317714/jenkins-hbase4.apache.org%2C38421%2C1689200317714.1689200318504 2023-07-12 22:18:38,559 DEBUG [RS:1;jenkins-hbase4:38421] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK], DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK]] 2023-07-12 22:18:38,559 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:38,559 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:38,559 INFO [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,46711,1689200317552/jenkins-hbase4.apache.org%2C46711%2C1689200317552.1689200318504 2023-07-12 22:18:38,559 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:38,561 DEBUG [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK], DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK]] 2023-07-12 22:18:38,567 INFO [RS:2;jenkins-hbase4:43539] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,43539,1689200317882/jenkins-hbase4.apache.org%2C43539%2C1689200317882.1689200318529 2023-07-12 22:18:38,567 DEBUG [RS:2;jenkins-hbase4:43539] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK], DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK]] 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:38,573 DEBUG [jenkins-hbase4:35207] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:38,574 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46711,1689200317552, state=OPENING 2023-07-12 22:18:38,576 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region 
location doesn't exist, create it 2023-07-12 22:18:38,578 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:38,579 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46711,1689200317552}] 2023-07-12 22:18:38,579 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:38,686 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 22:18:38,733 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:38,734 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:38,736 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:38,742 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 22:18:38,742 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:38,745 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46711%2C1689200317552.meta, suffix=.meta, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,46711,1689200317552, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs, maxLogs=32 2023-07-12 22:18:38,761 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:38,761 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:38,761 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:38,764 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,46711,1689200317552/jenkins-hbase4.apache.org%2C46711%2C1689200317552.meta.1689200318745.meta 2023-07-12 22:18:38,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK], DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK]] 2023-07-12 22:18:38,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:38,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:38,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 22:18:38,765 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 22:18:38,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 22:18:38,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 22:18:38,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 22:18:38,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 22:18:38,773 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/info 2023-07-12 22:18:38,773 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/info 2023-07-12 22:18:38,774 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 22:18:38,776 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,776 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 22:18:38,779 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:38,779 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/rep_barrier 2023-07-12 22:18:38,780 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 22:18:38,781 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,781 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 22:18:38,782 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/table 2023-07-12 22:18:38,782 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/table 2023-07-12 22:18:38,782 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 22:18:38,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:38,783 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740 2023-07-12 22:18:38,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740 2023-07-12 22:18:38,787 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 22:18:38,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 22:18:38,788 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11401061920, jitterRate=0.061806634068489075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 22:18:38,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 22:18:38,789 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689200318733 2023-07-12 22:18:38,794 WARN [ReadOnlyZKClient-127.0.0.1:61599@0x67ea00fe] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 22:18:38,795 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 22:18:38,795 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:38,796 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 22:18:38,797 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46711,1689200317552, state=OPEN 2023-07-12 22:18:38,798 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34710, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:38,799 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 22:18:38,799 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 22:18:38,800 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:38,802 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 22:18:38,803 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 22:18:38,804 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 22:18:38,804 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46711,1689200317552 in 220 msec 2023-07-12 22:18:38,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 22:18:38,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 384 msec 2023-07-12 22:18:38,806 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 624 msec 2023-07-12 22:18:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689200318807, completionTime=-1 2023-07-12 22:18:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 22:18:38,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 22:18:38,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 22:18:38,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689200378809 2023-07-12 22:18:38,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689200438809 2023-07-12 22:18:38,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 1 msec 2023-07-12 22:18:38,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35207,1689200317344-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 
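Editor's note: the descriptor logged for 'hbase:rsgroup' above (MultiRowMutationEndpoint coprocessor, SPLIT_POLICY => DisabledRegionSplitPolicy, single column family 'm') is created internally by the master's RSGroupStartupWorker, but an equivalent descriptor can be built with the public HBase 2.x client API. A minimal sketch, assuming an ordinary client connection and a hypothetical table name rsgroup_demo (the class name and table name are illustrative, not part of the run above):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("rsgroup_demo"))          // hypothetical name
              // coprocessor$1 entry in the logged descriptor
              .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
              // METADATA => {'SPLIT_POLICY' => ...} in the logged descriptor
              .setRegionSplitPolicyClassName(
                  "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                  .setMaxVersions(1)                 // VERSIONS => '1'
                  .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                  .setBlocksize(65536)               // BLOCKSIZE => '65536'
                  .build())
              .build();
          admin.createTable(td);
        }
      }
    }

DisabledRegionSplitPolicy keeps the rsgroup bookkeeping table in a single region, which is why the later open of 88c2f1300a1790e16ac535aa0e61d6f0 reports that policy while hbase:namespace reports the default stepping split policy.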
2023-07-12 22:18:38,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35207,1689200317344-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35207,1689200317344-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35207, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:38,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-12 22:18:38,814 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:38,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:38,814 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:38,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 22:18:38,816 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 22:18:38,816 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:38,817 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:38,817 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:38,817 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0 empty. 
2023-07-12 22:18:38,818 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:38,818 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 22:18:38,818 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:38,818 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c empty. 2023-07-12 22:18:38,819 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:38,819 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 22:18:38,836 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:38,843 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 88c2f1300a1790e16ac535aa0e61d6f0, NAME => 'hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp 2023-07-12 22:18:38,844 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:38,845 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 85f2090dd4ac7973c8f0983cb6cef03c, NAME => 'hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] 
regionserver.HRegion(1604): Closing 88c2f1300a1790e16ac535aa0e61d6f0, disabling compactions & flushes 2023-07-12 22:18:38,859 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. after waiting 0 ms 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:38,859 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:38,859 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 88c2f1300a1790e16ac535aa0e61d6f0: 2023-07-12 22:18:38,862 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 85f2090dd4ac7973c8f0983cb6cef03c, disabling compactions & flushes 2023-07-12 22:18:38,863 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. after waiting 0 ms 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:38,863 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:38,863 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 85f2090dd4ac7973c8f0983cb6cef03c: 2023-07-12 22:18:38,863 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200318863"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200318863"}]},"ts":"1689200318863"} 2023-07-12 22:18:38,865 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
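Editor's note: the MetaTableAccessor "Put {...}" entries above are ordinary writes into the catalog table: the row key is the full region name and the values land in the 'info' family under qualifiers such as 'regioninfo' and 'state'. A read-side sketch of what those rows look like, assuming a plain client connection (class name is illustrative; normal clients only read hbase:meta, they never write it directly):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DumpMetaStates {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.valueOf("hbase:meta"))) {
          Scan scan = new Scan().addFamily(Bytes.toBytes("info"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              // Row key is the region name, e.g. hbase:rsgroup,,1689200318800.88c2f130...
              byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
              System.out.println(Bytes.toStringBinary(r.getRow()) + " state="
                  + (state == null ? "n/a" : Bytes.toString(state)));
            }
          }
        }
      }
    }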
2023-07-12 22:18:38,866 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:38,866 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:38,866 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200318866"}]},"ts":"1689200318866"} 2023-07-12 22:18:38,867 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200318867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200318867"}]},"ts":"1689200318867"} 2023-07-12 22:18:38,867 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 22:18:38,868 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 22:18:38,869 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:38,869 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200318869"}]},"ts":"1689200318869"} 2023-07-12 22:18:38,870 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 22:18:38,871 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:38,871 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:38,871 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:38,871 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:38,871 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:38,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=88c2f1300a1790e16ac535aa0e61d6f0, ASSIGN}] 2023-07-12 22:18:38,872 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=88c2f1300a1790e16ac535aa0e61d6f0, ASSIGN 2023-07-12 22:18:38,873 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=88c2f1300a1790e16ac535aa0e61d6f0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43539,1689200317882; forceNewPlan=false, retain=false 2023-07-12 22:18:38,874 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are 
{/default-rack=0} 2023-07-12 22:18:38,874 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:38,874 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:38,874 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:38,874 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:38,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=85f2090dd4ac7973c8f0983cb6cef03c, ASSIGN}] 2023-07-12 22:18:38,876 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=85f2090dd4ac7973c8f0983cb6cef03c, ASSIGN 2023-07-12 22:18:38,876 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=85f2090dd4ac7973c8f0983cb6cef03c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38421,1689200317714; forceNewPlan=false, retain=false 2023-07-12 22:18:38,876 INFO [jenkins-hbase4:35207] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 22:18:38,878 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=88c2f1300a1790e16ac535aa0e61d6f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:38,878 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200318878"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200318878"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200318878"}]},"ts":"1689200318878"} 2023-07-12 22:18:38,879 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=85f2090dd4ac7973c8f0983cb6cef03c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:38,879 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200318879"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200318879"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200318879"}]},"ts":"1689200318879"} 2023-07-12 22:18:38,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 88c2f1300a1790e16ac535aa0e61d6f0, server=jenkins-hbase4.apache.org,43539,1689200317882}] 2023-07-12 22:18:38,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 85f2090dd4ac7973c8f0983cb6cef03c, server=jenkins-hbase4.apache.org,38421,1689200317714}] 2023-07-12 22:18:39,038 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,038 DEBUG 
[RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,038 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:39,038 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 22:18:39,040 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53764, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:39,040 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 22:18:39,044 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:39,044 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 85f2090dd4ac7973c8f0983cb6cef03c, NAME => 'hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:39,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:39,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,045 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88c2f1300a1790e16ac535aa0e61d6f0, NAME => 'hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. service=MultiRowMutationService 2023-07-12 22:18:39,046 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,047 INFO [StoreOpener-85f2090dd4ac7973c8f0983cb6cef03c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,049 DEBUG [StoreOpener-85f2090dd4ac7973c8f0983cb6cef03c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/info 2023-07-12 22:18:39,049 DEBUG [StoreOpener-85f2090dd4ac7973c8f0983cb6cef03c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/info 2023-07-12 22:18:39,049 INFO [StoreOpener-85f2090dd4ac7973c8f0983cb6cef03c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 85f2090dd4ac7973c8f0983cb6cef03c columnFamilyName info 2023-07-12 22:18:39,050 INFO [StoreOpener-85f2090dd4ac7973c8f0983cb6cef03c-1] regionserver.HStore(310): Store=85f2090dd4ac7973c8f0983cb6cef03c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:39,050 INFO [StoreOpener-88c2f1300a1790e16ac535aa0e61d6f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 
22:18:39,051 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,052 DEBUG [StoreOpener-88c2f1300a1790e16ac535aa0e61d6f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/m 2023-07-12 22:18:39,052 DEBUG [StoreOpener-88c2f1300a1790e16ac535aa0e61d6f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/m 2023-07-12 22:18:39,052 INFO [StoreOpener-88c2f1300a1790e16ac535aa0e61d6f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88c2f1300a1790e16ac535aa0e61d6f0 columnFamilyName m 2023-07-12 22:18:39,053 INFO [StoreOpener-88c2f1300a1790e16ac535aa0e61d6f0-1] regionserver.HStore(310): Store=88c2f1300a1790e16ac535aa0e61d6f0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:39,053 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:39,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:39,061 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:39,062 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 85f2090dd4ac7973c8f0983cb6cef03c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10265467840, jitterRate=-0.04395380616188049}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:39,062 
DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:39,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 85f2090dd4ac7973c8f0983cb6cef03c: 2023-07-12 22:18:39,062 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88c2f1300a1790e16ac535aa0e61d6f0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3e25bdeb, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:39,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88c2f1300a1790e16ac535aa0e61d6f0: 2023-07-12 22:18:39,063 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c., pid=9, masterSystemTime=1689200319038 2023-07-12 22:18:39,065 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0., pid=8, masterSystemTime=1689200319038 2023-07-12 22:18:39,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:39,069 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:39,069 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=85f2090dd4ac7973c8f0983cb6cef03c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,069 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689200319069"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200319069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200319069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200319069"}]},"ts":"1689200319069"} 2023-07-12 22:18:39,070 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:39,071 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 
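Editor's note: the split-policy strings printed for the two region opens above are driven by configuration rather than by the test itself: initialSize is twice the memstore flush size, and desiredMaxFileSize is the region max file size with a per-region random jitter applied, which is why the two opens report 10265467840 and 11401061920 around the 10 GB default. A sketch of the knobs involved, using standard property names; the values shown are the apparent defaults for this run, not something the log confirms was set explicitly:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class SplitPolicyKnobs {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Policy class used when a table does not set SPLIT_POLICY itself
        // (hbase:rsgroup overrides it with DisabledRegionSplitPolicy, as logged above).
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        // Upper bound for desiredMaxFileSize before jitter (about 10 GB here).
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
        // Memstore flush size; the initialSize=268435456 above is twice this value.
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        // In a real deployment these would normally live in hbase-site.xml.
      }
    }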
2023-07-12 22:18:39,071 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=88c2f1300a1790e16ac535aa0e61d6f0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,071 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689200319071"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200319071"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200319071"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200319071"}]},"ts":"1689200319071"} 2023-07-12 22:18:39,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 22:18:39,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 85f2090dd4ac7973c8f0983cb6cef03c, server=jenkins-hbase4.apache.org,38421,1689200317714 in 186 msec 2023-07-12 22:18:39,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 22:18:39,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 88c2f1300a1790e16ac535aa0e61d6f0, server=jenkins-hbase4.apache.org,43539,1689200317882 in 188 msec 2023-07-12 22:18:39,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 22:18:39,076 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=85f2090dd4ac7973c8f0983cb6cef03c, ASSIGN in 199 msec 2023-07-12 22:18:39,077 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:39,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 22:18:39,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=88c2f1300a1790e16ac535aa0e61d6f0, ASSIGN in 202 msec 2023-07-12 22:18:39,077 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200319077"}]},"ts":"1689200319077"} 2023-07-12 22:18:39,078 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:39,078 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200319078"}]},"ts":"1689200319078"} 2023-07-12 22:18:39,078 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 22:18:39,079 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 22:18:39,082 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:39,084 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:39,084 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 268 msec 2023-07-12 22:18:39,085 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 284 msec 2023-07-12 22:18:39,105 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:39,106 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:39,108 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 22:18:39,108 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 22:18:39,113 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:39,114 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,116 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:39,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 22:18:39,119 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 22:18:39,119 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:39,119 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:39,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:39,124 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:53780, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:39,127 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 22:18:39,136 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:39,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-12 22:18:39,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 22:18:39,155 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:39,164 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-12 22:18:39,173 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 22:18:39,176 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.137sec 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35207,1689200317344-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 22:18:39,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35207,1689200317344-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
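Editor's note: the two CreateNamespaceProcedure runs above (pid=10 for 'default', pid=11 for 'hbase') are the master bootstrapping its built-in namespaces. User namespaces go through the same procedure when created via the Admin API; a minimal sketch with a hypothetical namespace name demo_ns:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateNamespaceProcedure on the master, like pid=10/11 above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        }
      }
    }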
2023-07-12 22:18:39,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 22:18:39,250 DEBUG [Listener at localhost/36883] zookeeper.ReadOnlyZKClient(139): Connect 0x3387784e to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:39,256 DEBUG [Listener at localhost/36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b680df9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:39,257 DEBUG [hconnection-0x4871974e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:39,259 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:39,261 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:39,261 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:39,263 DEBUG [Listener at localhost/36883] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 22:18:39,265 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 22:18:39,268 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 22:18:39,268 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:39,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 22:18:39,269 DEBUG [Listener at localhost/36883] zookeeper.ReadOnlyZKClient(139): Connect 0x77684c01 to 127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:39,279 DEBUG [Listener at localhost/36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33677e45, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:39,279 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:39,283 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:39,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b9dd68f000a connected 2023-07-12 
22:18:39,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,289 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 22:18:39,305 INFO [Listener at localhost/36883] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 22:18:39,305 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:39,305 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:39,305 INFO [Listener at localhost/36883] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 22:18:39,306 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 22:18:39,306 INFO [Listener at localhost/36883] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 22:18:39,306 INFO [Listener at localhost/36883] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 22:18:39,307 INFO [Listener at localhost/36883] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32935 2023-07-12 22:18:39,307 INFO [Listener at localhost/36883] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 22:18:39,309 DEBUG [Listener at localhost/36883] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 22:18:39,309 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:39,311 INFO [Listener at localhost/36883] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 22:18:39,312 INFO [Listener at localhost/36883] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32935 connecting to ZooKeeper ensemble=127.0.0.1:61599 2023-07-12 22:18:39,316 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:329350x0, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 22:18:39,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32935-0x1015b9dd68f000b connected 2023-07-12 22:18:39,318 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, 
baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 22:18:39,319 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 22:18:39,319 DEBUG [Listener at localhost/36883] zookeeper.ZKUtil(164): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 22:18:39,322 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32935 2023-07-12 22:18:39,323 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32935 2023-07-12 22:18:39,325 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32935 2023-07-12 22:18:39,325 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32935 2023-07-12 22:18:39,325 DEBUG [Listener at localhost/36883] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32935 2023-07-12 22:18:39,327 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 22:18:39,327 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 22:18:39,327 INFO [Listener at localhost/36883] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 22:18:39,328 INFO [Listener at localhost/36883] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 22:18:39,328 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 22:18:39,328 INFO [Listener at localhost/36883] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 22:18:39,328 INFO [Listener at localhost/36883] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 22:18:39,328 INFO [Listener at localhost/36883] http.HttpServer(1146): Jetty bound to port 40353 2023-07-12 22:18:39,329 INFO [Listener at localhost/36883] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 22:18:39,335 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:39,335 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1da83b10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,AVAILABLE} 2023-07-12 22:18:39,335 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:39,335 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@687be613{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 22:18:39,449 INFO [Listener at localhost/36883] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 22:18:39,450 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 22:18:39,450 INFO [Listener at localhost/36883] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 22:18:39,451 INFO [Listener at localhost/36883] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 22:18:39,451 INFO [Listener at localhost/36883] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 22:18:39,452 INFO [Listener at localhost/36883] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4aa71840{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/java.io.tmpdir/jetty-0_0_0_0-40353-hbase-server-2_4_18-SNAPSHOT_jar-_-any-209768529056999912/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:39,456 INFO [Listener at localhost/36883] server.AbstractConnector(333): Started ServerConnector@66e1c64c{HTTP/1.1, (http/1.1)}{0.0.0.0:40353} 2023-07-12 22:18:39,456 INFO [Listener at localhost/36883] server.Server(415): Started @44529ms 2023-07-12 22:18:39,472 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(951): ClusterId : f6676a37-8259-469d-b369-c5d9d72f0308 2023-07-12 22:18:39,473 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 22:18:39,475 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 22:18:39,475 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 22:18:39,478 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 22:18:39,481 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ReadOnlyZKClient(139): Connect 0x23688a34 to 
127.0.0.1:61599 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 22:18:39,485 DEBUG [RS:3;jenkins-hbase4:32935] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2daf6029, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 22:18:39,486 DEBUG [RS:3;jenkins-hbase4:32935] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2990e144, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:39,498 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:32935 2023-07-12 22:18:39,498 INFO [RS:3;jenkins-hbase4:32935] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 22:18:39,499 INFO [RS:3;jenkins-hbase4:32935] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 22:18:39,499 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 22:18:39,499 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35207,1689200317344 with isa=jenkins-hbase4.apache.org/172.31.14.131:32935, startcode=1689200319304 2023-07-12 22:18:39,499 DEBUG [RS:3;jenkins-hbase4:32935] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 22:18:39,501 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59375, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 22:18:39,502 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35207] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,502 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 22:18:39,502 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3 2023-07-12 22:18:39,502 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34687 2023-07-12 22:18:39,502 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37621 2023-07-12 22:18:39,508 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:39,508 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:39,508 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,508 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:39,508 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:39,508 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,508 WARN [RS:3;jenkins-hbase4:32935] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 22:18:39,508 INFO [RS:3;jenkins-hbase4:32935] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 22:18:39,508 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,508 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32935,1689200319304] 2023-07-12 22:18:39,509 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 22:18:39,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:39,510 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:39,510 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:39,510 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 22:18:39,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,512 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,513 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,513 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:39,513 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:39,514 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:39,514 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ZKUtil(162): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,515 DEBUG [RS:3;jenkins-hbase4:32935] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 22:18:39,515 INFO [RS:3;jenkins-hbase4:32935] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 22:18:39,516 INFO [RS:3;jenkins-hbase4:32935] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 22:18:39,516 INFO [RS:3;jenkins-hbase4:32935] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 22:18:39,516 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:39,517 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 22:18:39,518 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,519 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 22:18:39,520 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,520 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,520 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,520 DEBUG [RS:3;jenkins-hbase4:32935] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 22:18:39,521 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:39,521 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:39,521 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 22:18:39,536 INFO [RS:3;jenkins-hbase4:32935] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 22:18:39,536 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32935,1689200319304-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 22:18:39,551 INFO [RS:3;jenkins-hbase4:32935] regionserver.Replication(203): jenkins-hbase4.apache.org,32935,1689200319304 started 2023-07-12 22:18:39,551 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32935,1689200319304, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32935, sessionid=0x1015b9dd68f000b 2023-07-12 22:18:39,551 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 22:18:39,551 DEBUG [RS:3;jenkins-hbase4:32935] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,551 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32935,1689200319304' 2023-07-12 22:18:39,551 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 22:18:39,552 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 22:18:39,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:39,552 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 22:18:39,552 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 22:18:39,552 DEBUG [RS:3;jenkins-hbase4:32935] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:39,552 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32935,1689200319304' 2023-07-12 22:18:39,553 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 22:18:39,553 DEBUG [RS:3;jenkins-hbase4:32935] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 22:18:39,553 DEBUG [RS:3;jenkins-hbase4:32935] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 22:18:39,554 INFO [RS:3;jenkins-hbase4:32935] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 22:18:39,554 INFO [RS:3;jenkins-hbase4:32935] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 22:18:39,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:39,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:39,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:39,564 DEBUG [hconnection-0x9e79009-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:39,566 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34728, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:39,569 DEBUG [hconnection-0x9e79009-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 22:18:39,572 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 22:18:39,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:39,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:39,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36218 deadline: 1689201519576, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:39,577 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:39,579 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:39,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,580 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:39,647 INFO [Listener at localhost/36883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 515) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: hconnection-0x3d4078ed-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x3d4078ed-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x67ea00fe-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/36883.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2082160479_17 at /127.0.0.1:45432 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39075,1689200311877 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1533168194-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 34687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x9e79009-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54162@0x23510898-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp292265569-2316 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 1 on default port 40511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1304710478-2245-acceptor-0@21bc1666-ServerConnector@1cfaac5f{HTTP/1.1, (http/1.1)}{0.0.0.0:45293} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp979295654-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp218261346-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36883 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1533168194-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData-prefix:jenkins-hbase4.apache.org,35207,1689200317344 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp979295654-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x2f42a090-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x2f42a090-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp292265569-2319-acceptor-0@287ecc86-ServerConnector@1fac623c{HTTP/1.1, (http/1.1)}{0.0.0.0:35925} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1304710478-2250 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1666391846@qtp-447905619-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ForkJoinPool-3-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: jenkins-hbase4:43539Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:32935-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 4 on default port 34299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 34687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33559 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x1ed9f5b1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 2067732400@qtp-319522501-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44733 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp218261346-2585-acceptor-0@5de3e452-ServerConnector@66e1c64c{HTTP/1.1, (http/1.1)}{0.0.0.0:40353} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_48616539_17 at /127.0.0.1:42730 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:42720 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:32935Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
java.util.concurrent.ThreadPoolExecutor$Worker@45f826db[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp979295654-2274 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-648efab3-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36883 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1304710478-2244 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp218261346-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1437209379-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x23688a34-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1533168194-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:33559 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3-prefix:jenkins-hbase4.apache.org,46711,1689200317552 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data5/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d4078ed-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:45488 [Receiving block 
BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1437209379-2215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:34687 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x3387784e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 34687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp218261346-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:32935 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp292265569-2320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x67ea00fe sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@22df96ff[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@57fdb766 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 600593200@qtp-714271145-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1437209379-2216 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:49836 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:61599): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 0 on default port 34299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1533168194-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d4078ed-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x3387784e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1922b0ea java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1533168194-2305-acceptor-0@fbff00c-ServerConnector@40742411{HTTP/1.1, (http/1.1)}{0.0.0.0:41009} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1348121467_17 at /127.0.0.1:42684 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially 
hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x23688a34-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x23688a34 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp218261346-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp292265569-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-63016ce2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x77684c01-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d4078ed-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1437209379-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x77684c01-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-51c7c9ec-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x09e3e8ed-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36883 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) 
org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp979295654-2275-acceptor-0@71b0c722-ServerConnector@2a64ec80{HTTP/1.1, (http/1.1)}{0.0.0.0:41381} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x2f42a090 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:34687 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3d4078ed-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43539 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36883.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data6/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9e79009-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40511 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@1bbd09cb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data1/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@688dcf59 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:61599 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data4/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1293856938@qtp-999667468-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/36883.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 40511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:38421 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36883 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:33559 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@286e696b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:43539-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@54f2742a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2082160479_17 at /127.0.0.1:49834 [Receiving block 
BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43911-SendThread(127.0.0.1:54162) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RS:2;jenkins-hbase4:43539 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp292265569-2317 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp218261346-2584 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33559 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_48616539_17 at /127.0.0.1:45554 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3-prefix:jenkins-hbase4.apache.org,38421,1689200317714 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x3387784e-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34299 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1533168194-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33559 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1304710478-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp218261346-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 34299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 36883 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3-prefix:jenkins-hbase4.apache.org,46711,1689200317552.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@2adac310 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 40511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 914453684@qtp-714271145-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46647 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:45506 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2082160479_17 at /127.0.0.1:45472 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54162@0x23510898-SendThread(127.0.0.1:54162) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x77684c01 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_48616539_17 at /127.0.0.1:49844 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 829262651@qtp-319522501-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1533168194-2304 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200318293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3d4078ed-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38421Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1304710478-2247 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp292265569-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_48616539_17 at /127.0.0.1:45496 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:33559 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5599d434 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1348121467_17 at /127.0.0.1:45456 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x1ed9f5b1-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: M:0;jenkins-hbase4:35207 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-5f282e54-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x67ea00fe-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54162@0x23510898 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1304710478-2246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x09e3e8ed sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 34299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x1ed9f5b1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/508143200.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@8c70525 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43911-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1348121467_17 at /127.0.0.1:42704 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@366d44ea java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data2/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34687 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61599@0x09e3e8ed-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7962d3ca sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:46711-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36883 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:46711Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins.hfs.9 java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:1;jenkins-hbase4:38421-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 36883 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33559 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200318293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33559 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1304710478-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35207,1689200317344 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1437209379-2217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33559 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 34687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34687 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@1c092624 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@14b1d995[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp979295654-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:49858 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 826083000@qtp-447905619-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40387 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1714457014) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:34687 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:46711 
java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp1437209379-2214-acceptor-0@6874e106-ServerConnector@75842d05{HTTP/1.1, (http/1.1)}{0.0.0.0:37621} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp979295654-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-a311fa-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:34687 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp292265569-2321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1531831524@qtp-999667468-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45267 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp979295654-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32935 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2082160479_17 at /127.0.0.1:42718 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d4078ed-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data3/current/BP-1534897092-172.31.14.131-1689200316579 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1348121467_17 at /127.0.0.1:49796 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@5f680cfd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1304710478-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_392540038_17 at /127.0.0.1:42744 [Receiving block BP-1534897092-172.31.14.131-1689200316579:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883-SendThread(127.0.0.1:61599) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp218261346-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1437209379-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1533168194-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp292265569-2315 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36883-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp979295654-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1437209379-2213 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1984785911.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3-prefix:jenkins-hbase4.apache.org,43539,1689200317882 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: PacketResponder: BP-1534897092-172.31.14.131-1689200316579:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x4871974e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=839 (was 799) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=392 (was 376) - SystemLoadAverage LEAK? -, ProcessCount=173 (was 175), AvailableMemoryMB=6761 (was 6693) - AvailableMemoryMB LEAK? - 2023-07-12 22:18:39,651 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-12 22:18:39,660 INFO [RS:3;jenkins-hbase4:32935] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32935%2C1689200319304, suffix=, logDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,32935,1689200319304, archiveDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs, maxLogs=32 2023-07-12 22:18:39,676 INFO [Listener at localhost/36883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=564, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=392, ProcessCount=173, AvailableMemoryMB=6760 2023-07-12 22:18:39,676 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-12 22:18:39,676 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 22:18:39,688 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK] 2023-07-12 22:18:39,690 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK] 2023-07-12 22:18:39,690 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK] 2023-07-12 22:18:39,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:39,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:39,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:39,693 INFO [RS:3;jenkins-hbase4:32935] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/WALs/jenkins-hbase4.apache.org,32935,1689200319304/jenkins-hbase4.apache.org%2C32935%2C1689200319304.1689200319661 2023-07-12 22:18:39,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:39,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:39,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:39,694 DEBUG [RS:3;jenkins-hbase4:32935] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38483,DS-b579477e-893c-4f09-8805-bfe8e9b2b85c,DISK], DatanodeInfoWithStorage[127.0.0.1:41365,DS-fa475831-c6d0-4e59-b8f1-f41fed357e55,DISK], DatanodeInfoWithStorage[127.0.0.1:44885,DS-9f007403-0901-431e-b6d9-15ba576dd9b5,DISK]] 2023-07-12 22:18:39,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:39,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:39,702 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:39,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:39,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,705 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:39,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:39,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:39,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:39,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36218 deadline: 1689201519714, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:39,715 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:39,717 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:39,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:39,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:39,718 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:39,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:39,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:39,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:39,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 22:18:39,723 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:39,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 22:18:39,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 22:18:39,725 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:39,725 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:39,725 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:39,729 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 22:18:39,731 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 
22:18:39,731 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73 empty. 2023-07-12 22:18:39,732 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:39,732 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 22:18:39,755 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 22:18:39,756 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => a6c0232a8e108f11337936787e2a4d73, NAME => 't1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing a6c0232a8e108f11337936787e2a4d73, disabling compactions & flushes 2023-07-12 22:18:39,775 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. after waiting 0 ms 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:39,775 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:39,775 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for a6c0232a8e108f11337936787e2a4d73: 2023-07-12 22:18:39,777 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 22:18:39,778 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200319778"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200319778"}]},"ts":"1689200319778"} 2023-07-12 22:18:39,780 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 22:18:39,782 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 22:18:39,782 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200319782"}]},"ts":"1689200319782"} 2023-07-12 22:18:39,783 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 22:18:39,789 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 22:18:39,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, ASSIGN}] 2023-07-12 22:18:39,792 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, ASSIGN 2023-07-12 22:18:39,793 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46711,1689200317552; forceNewPlan=false, retain=false 2023-07-12 22:18:39,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 22:18:39,946 INFO [jenkins-hbase4:35207] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 22:18:39,948 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a6c0232a8e108f11337936787e2a4d73, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:39,948 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200319948"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200319948"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200319948"}]},"ts":"1689200319948"} 2023-07-12 22:18:39,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure a6c0232a8e108f11337936787e2a4d73, server=jenkins-hbase4.apache.org,46711,1689200317552}] 2023-07-12 22:18:40,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 22:18:40,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:40,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a6c0232a8e108f11337936787e2a4d73, NAME => 't1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.', STARTKEY => '', ENDKEY => ''} 2023-07-12 22:18:40,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 22:18:40,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,109 INFO [StoreOpener-a6c0232a8e108f11337936787e2a4d73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,110 DEBUG [StoreOpener-a6c0232a8e108f11337936787e2a4d73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73/cf1 2023-07-12 22:18:40,110 DEBUG [StoreOpener-a6c0232a8e108f11337936787e2a4d73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73/cf1 2023-07-12 22:18:40,111 INFO [StoreOpener-a6c0232a8e108f11337936787e2a4d73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a6c0232a8e108f11337936787e2a4d73 columnFamilyName cf1 2023-07-12 22:18:40,111 INFO [StoreOpener-a6c0232a8e108f11337936787e2a4d73-1] regionserver.HStore(310): Store=a6c0232a8e108f11337936787e2a4d73/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 22:18:40,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 22:18:40,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a6c0232a8e108f11337936787e2a4d73; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10690253600, jitterRate=-0.004392549395561218}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 22:18:40,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a6c0232a8e108f11337936787e2a4d73: 2023-07-12 22:18:40,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73., pid=14, masterSystemTime=1689200320103 2023-07-12 22:18:40,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:40,120 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 
2023-07-12 22:18:40,121 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a6c0232a8e108f11337936787e2a4d73, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:40,121 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200320121"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689200320121"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689200320121"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689200320121"}]},"ts":"1689200320121"} 2023-07-12 22:18:40,123 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 22:18:40,124 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure a6c0232a8e108f11337936787e2a4d73, server=jenkins-hbase4.apache.org,46711,1689200317552 in 172 msec 2023-07-12 22:18:40,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 22:18:40,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, ASSIGN in 335 msec 2023-07-12 22:18:40,126 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 22:18:40,126 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200320126"}]},"ts":"1689200320126"} 2023-07-12 22:18:40,129 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 22:18:40,133 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 22:18:40,134 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 413 msec 2023-07-12 22:18:40,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 22:18:40,335 INFO [Listener at localhost/36883] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 22:18:40,336 DEBUG [Listener at localhost/36883] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 22:18:40,336 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,338 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-12 22:18:40,338 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,338 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-12 22:18:40,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 22:18:40,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 22:18:40,344 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 22:18:40,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 22:18:40,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:36218 deadline: 1689200380340, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 22:18:40,346 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,351 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=11 msec 2023-07-12 22:18:40,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:40,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:40,448 INFO [Listener at localhost/36883] client.HBaseAdmin$15(890): Started disable of t1 2023-07-12 22:18:40,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-12 22:18:40,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 22:18:40,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 22:18:40,453 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200320453"}]},"ts":"1689200320453"} 2023-07-12 22:18:40,454 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 22:18:40,455 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 22:18:40,456 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, UNASSIGN}] 2023-07-12 22:18:40,457 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, UNASSIGN 2023-07-12 22:18:40,457 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a6c0232a8e108f11337936787e2a4d73, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:40,457 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200320457"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689200320457"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689200320457"}]},"ts":"1689200320457"} 2023-07-12 22:18:40,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure a6c0232a8e108f11337936787e2a4d73, server=jenkins-hbase4.apache.org,46711,1689200317552}] 2023-07-12 22:18:40,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 22:18:40,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a6c0232a8e108f11337936787e2a4d73, disabling compactions & flushes 2023-07-12 22:18:40,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:40,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:40,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. after waiting 0 ms 2023-07-12 22:18:40,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 
2023-07-12 22:18:40,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/default/t1/a6c0232a8e108f11337936787e2a4d73/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 22:18:40,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73. 2023-07-12 22:18:40,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a6c0232a8e108f11337936787e2a4d73: 2023-07-12 22:18:40,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,617 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a6c0232a8e108f11337936787e2a4d73, regionState=CLOSED 2023-07-12 22:18:40,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689200320617"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689200320617"}]},"ts":"1689200320617"} 2023-07-12 22:18:40,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 22:18:40,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure a6c0232a8e108f11337936787e2a4d73, server=jenkins-hbase4.apache.org,46711,1689200317552 in 160 msec 2023-07-12 22:18:40,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 22:18:40,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=a6c0232a8e108f11337936787e2a4d73, UNASSIGN in 163 msec 2023-07-12 22:18:40,622 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689200320621"}]},"ts":"1689200320621"} 2023-07-12 22:18:40,623 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 22:18:40,624 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 22:18:40,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-12 22:18:40,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 22:18:40,755 INFO [Listener at localhost/36883] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 22:18:40,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-12 22:18:40,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 22:18:40,759 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 22:18:40,759 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 22:18:40,760 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 22:18:40,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:40,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:40,764 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,766 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73/cf1, FileablePath, hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73/recovered.edits] 2023-07-12 22:18:40,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 22:18:40,772 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73/recovered.edits/4.seqid to hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/archive/data/default/t1/a6c0232a8e108f11337936787e2a4d73/recovered.edits/4.seqid 2023-07-12 22:18:40,773 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/.tmp/data/default/t1/a6c0232a8e108f11337936787e2a4d73 2023-07-12 22:18:40,773 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 22:18:40,776 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 22:18:40,778 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 22:18:40,780 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 22:18:40,781 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 22:18:40,781 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-12 22:18:40,781 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689200320781"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:40,783 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 22:18:40,783 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a6c0232a8e108f11337936787e2a4d73, NAME => 't1,,1689200319720.a6c0232a8e108f11337936787e2a4d73.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 22:18:40,783 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-12 22:18:40,783 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689200320783"}]},"ts":"9223372036854775807"} 2023-07-12 22:18:40,785 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 22:18:40,787 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 22:18:40,793 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 31 msec 2023-07-12 22:18:40,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:40,813 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 22:18:40,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:40,813 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 22:18:40,813 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:40,813 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 22:18:40,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 22:18:40,869 INFO [Listener at localhost/36883] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 22:18:40,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,874 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:40,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:40,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:40,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:40,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:40,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:40,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:40,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:40,886 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:40,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:40,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:40,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:40,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:40,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:40,896 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:40,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36218 deadline: 1689201520896, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:40,896 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:40,900 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,901 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:40,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:40,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:40,921 INFO [Listener at localhost/36883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 564) - Thread LEAK? -, OpenFileDescriptor=847 (was 839) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=392 (was 392), ProcessCount=173 (was 173), AvailableMemoryMB=6768 (was 6760) - AvailableMemoryMB LEAK? - 2023-07-12 22:18:40,921 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-12 22:18:40,939 INFO [Listener at localhost/36883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=392, ProcessCount=173, AvailableMemoryMB=6767 2023-07-12 22:18:40,939 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-12 22:18:40,939 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 22:18:40,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:40,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:40,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:40,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:40,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:40,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:40,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:40,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:40,952 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:40,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:40,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:40,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:40,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:40,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:40,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:40,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201520962, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:40,962 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:40,964 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,965 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:40,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:40,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:40,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 22:18:40,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:40,968 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 22:18:40,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 22:18:40,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request 
for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 22:18:40,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:40,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:40,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:40,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:40,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:40,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:40,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:40,986 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:40,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:40,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:40,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:40,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:40,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:40,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,993 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:40,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:40,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201520995, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:40,995 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:40,997 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:40,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:40,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:40,998 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:40,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:40,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:41,017 INFO [Listener at localhost/36883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? -, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=392 (was 392), ProcessCount=173 (was 173), AvailableMemoryMB=6765 (was 6767) 2023-07-12 22:18:41,017 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-12 22:18:41,036 INFO [Listener at localhost/36883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=392, ProcessCount=173, AvailableMemoryMB=6765 2023-07-12 22:18:41,036 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-12 22:18:41,036 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 22:18:41,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:41,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:41,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:41,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:41,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:41,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:41,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:41,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,051 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:41,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:41,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:41,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:41,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201521059, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:41,060 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:41,061 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:41,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,062 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:41,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:41,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:41,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:41,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:41,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:41,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:41,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:41,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:41,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:41,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,077 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:41,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:41,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:41,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:41,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201521086, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:41,087 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:41,088 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:41,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,089 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:41,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:41,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:41,110 INFO [Listener at localhost/36883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? -, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=392 (was 392), ProcessCount=173 (was 173), AvailableMemoryMB=6766 (was 6765) - AvailableMemoryMB LEAK? 
- 2023-07-12 22:18:41,110 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-12 22:18:41,129 INFO [Listener at localhost/36883] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=392, ProcessCount=173, AvailableMemoryMB=6764 2023-07-12 22:18:41,129 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-12 22:18:41,129 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 22:18:41,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:41,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 22:18:41,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:41,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:41,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:41,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:41,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:41,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,143 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:41,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:41,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,148 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:41,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:41,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201521154, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:41,155 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 22:18:41,156 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:41,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,157 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:41,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:41,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:41,158 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-12 22:18:41,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-12 22:18:41,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 22:18:41,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 22:18:41,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 22:18:41,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,173 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 22:18:41,178 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:41,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 22:18:41,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 22:18:41,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 22:18:41,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:36218 deadline: 1689201521275, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 22:18:41,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 22:18:41,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:41,297 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 22:18:41,298 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-12 22:18:41,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 22:18:41,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-12 22:18:41,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 22:18:41,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 22:18:41,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 22:18:41,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-12 22:18:41,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,414 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,416 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 22:18:41,418 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,419 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 22:18:41,419 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 22:18:41,420 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,421 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 22:18:41,422 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 22:18:41,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 22:18:41,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 22:18:41,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 22:18:41,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 22:18:41,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:36218 deadline: 1689200381527, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-12 22:18:41,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:41,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:41,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:41,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:41,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:41,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-12 22:18:41,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 22:18:41,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 22:18:41,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 22:18:41,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 22:18:41,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 22:18:41,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 22:18:41,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 22:18:41,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 22:18:41,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 22:18:41,546 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 22:18:41,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 22:18:41,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 22:18:41,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 22:18:41,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 22:18:41,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 22:18:41,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35207] to rsgroup master 2023-07-12 22:18:41,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 22:18:41,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36218 deadline: 1689201521555, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 2023-07-12 22:18:41,556 WARN [Listener at localhost/36883] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35207 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 22:18:41,557 INFO [Listener at localhost/36883] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 22:18:41,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 22:18:41,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 22:18:41,558 INFO [Listener at localhost/36883] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32935, jenkins-hbase4.apache.org:38421, jenkins-hbase4.apache.org:43539, jenkins-hbase4.apache.org:46711], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 22:18:41,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 22:18:41,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35207] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 22:18:41,578 INFO [Listener at localhost/36883] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=393 (was 392) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 173) - ProcessCount LEAK? 
-, AvailableMemoryMB=6739 (was 6764) 2023-07-12 22:18:41,578 WARN [Listener at localhost/36883] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-12 22:18:41,579 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 22:18:41,579 INFO [Listener at localhost/36883] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3387784e to 127.0.0.1:61599 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] util.JVMClusterUtil(257): Found active master hash=888512001, stopped=false 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 22:18:41,579 DEBUG [Listener at localhost/36883] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 22:18:41,579 INFO [Listener at localhost/36883] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:41,581 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:41,581 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:41,582 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:41,582 INFO [Listener at localhost/36883] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 22:18:41,582 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:41,581 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 22:18:41,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:41,582 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:41,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 
22:18:41,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:41,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:41,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 22:18:41,583 DEBUG [Listener at localhost/36883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67ea00fe to 127.0.0.1:61599 2023-07-12 22:18:41,583 DEBUG [Listener at localhost/36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,583 INFO [Listener at localhost/36883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46711,1689200317552' ***** 2023-07-12 22:18:41,583 INFO [Listener at localhost/36883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:41,583 INFO [Listener at localhost/36883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38421,1689200317714' ***** 2023-07-12 22:18:41,583 INFO [Listener at localhost/36883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:41,583 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:41,583 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:41,583 INFO [Listener at localhost/36883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43539,1689200317882' ***** 2023-07-12 22:18:41,585 INFO [Listener at localhost/36883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:41,585 INFO [Listener at localhost/36883] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32935,1689200319304' ***** 2023-07-12 22:18:41,585 INFO [Listener at localhost/36883] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 22:18:41,585 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:41,585 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:41,589 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:41,590 INFO [RS:0;jenkins-hbase4:46711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@45896d80{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:41,590 INFO [RS:2;jenkins-hbase4:43539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@239401c3{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:41,590 INFO [RS:1;jenkins-hbase4:38421] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@39303d70{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:41,590 INFO [RS:3;jenkins-hbase4:32935] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4aa71840{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 22:18:41,590 INFO [RS:2;jenkins-hbase4:43539] server.AbstractConnector(383): Stopped ServerConnector@40742411{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,591 INFO [RS:2;jenkins-hbase4:43539] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:41,590 INFO [RS:0;jenkins-hbase4:46711] server.AbstractConnector(383): Stopped ServerConnector@1cfaac5f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,591 INFO [RS:3;jenkins-hbase4:32935] server.AbstractConnector(383): Stopped ServerConnector@66e1c64c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,591 INFO [RS:1;jenkins-hbase4:38421] server.AbstractConnector(383): Stopped ServerConnector@2a64ec80{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,591 INFO [RS:3;jenkins-hbase4:32935] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:41,591 INFO [RS:2;jenkins-hbase4:43539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@518030b3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:41,591 INFO [RS:0;jenkins-hbase4:46711] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:41,591 INFO [RS:1;jenkins-hbase4:38421] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:41,592 INFO [RS:2;jenkins-hbase4:43539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@399f6210{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:41,592 INFO [RS:3;jenkins-hbase4:32935] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@687be613{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:41,594 INFO [RS:1;jenkins-hbase4:38421] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5213eb6a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:41,593 INFO [RS:0;jenkins-hbase4:46711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7437223a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:41,595 INFO [RS:3;jenkins-hbase4:32935] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1da83b10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:41,595 INFO [RS:2;jenkins-hbase4:43539] 
regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:41,596 INFO [RS:2;jenkins-hbase4:43539] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:41,595 INFO [RS:0;jenkins-hbase4:46711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68a51037{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:41,595 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,595 INFO [RS:1;jenkins-hbase4:38421] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@57997e99{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:41,596 INFO [RS:2;jenkins-hbase4:43539] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:41,596 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(3305): Received CLOSE for 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:41,596 INFO [RS:3;jenkins-hbase4:32935] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:41,596 INFO [RS:3;jenkins-hbase4:32935] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:41,596 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:41,596 INFO [RS:3;jenkins-hbase4:32935] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:41,596 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:41,597 DEBUG [RS:3;jenkins-hbase4:32935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x23688a34 to 127.0.0.1:61599 2023-07-12 22:18:41,597 DEBUG [RS:3;jenkins-hbase4:32935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,597 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32935,1689200319304; all regions closed. 2023-07-12 22:18:41,597 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:41,597 DEBUG [RS:2;jenkins-hbase4:43539] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09e3e8ed to 127.0.0.1:61599 2023-07-12 22:18:41,597 DEBUG [RS:2;jenkins-hbase4:43539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88c2f1300a1790e16ac535aa0e61d6f0, disabling compactions & flushes 2023-07-12 22:18:41,598 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:41,598 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1478): Online Regions={88c2f1300a1790e16ac535aa0e61d6f0=hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0.} 2023-07-12 22:18:41,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 
2023-07-12 22:18:41,598 DEBUG [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1504): Waiting on 88c2f1300a1790e16ac535aa0e61d6f0 2023-07-12 22:18:41,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:41,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. after waiting 0 ms 2023-07-12 22:18:41,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:41,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 88c2f1300a1790e16ac535aa0e61d6f0 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-12 22:18:41,598 INFO [RS:0;jenkins-hbase4:46711] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:41,598 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:41,598 INFO [RS:1;jenkins-hbase4:38421] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 22:18:41,599 INFO [RS:1;jenkins-hbase4:38421] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:41,599 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:41,599 INFO [RS:1;jenkins-hbase4:38421] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 22:18:41,599 DEBUG [RS:0;jenkins-hbase4:46711] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f42a090 to 127.0.0.1:61599 2023-07-12 22:18:41,599 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(3305): Received CLOSE for 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:41,599 DEBUG [RS:0;jenkins-hbase4:46711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 22:18:41,599 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:41,599 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 22:18:41,600 DEBUG [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 22:18:41,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 22:18:41,600 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 22:18:41,600 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:41,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 22:18:41,600 DEBUG [RS:1;jenkins-hbase4:38421] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ed9f5b1 to 127.0.0.1:61599 2023-07-12 22:18:41,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 22:18:41,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 22:18:41,600 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-12 22:18:41,600 DEBUG [RS:1;jenkins-hbase4:38421] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 85f2090dd4ac7973c8f0983cb6cef03c, disabling compactions & flushes 2023-07-12 22:18:41,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:41,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:41,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. after waiting 0 ms 2023-07-12 22:18:41,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 
2023-07-12 22:18:41,601 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 22:18:41,602 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1478): Online Regions={85f2090dd4ac7973c8f0983cb6cef03c=hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c.} 2023-07-12 22:18:41,602 DEBUG [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1504): Waiting on 85f2090dd4ac7973c8f0983cb6cef03c 2023-07-12 22:18:41,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 85f2090dd4ac7973c8f0983cb6cef03c 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 22:18:41,606 DEBUG [RS:3;jenkins-hbase4:32935] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs 2023-07-12 22:18:41,606 INFO [RS:3;jenkins-hbase4:32935] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32935%2C1689200319304:(num 1689200319661) 2023-07-12 22:18:41,606 DEBUG [RS:3;jenkins-hbase4:32935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,606 INFO [RS:3;jenkins-hbase4:32935] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,606 INFO [RS:3;jenkins-hbase4:32935] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:41,606 INFO [RS:3;jenkins-hbase4:32935] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:41,606 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:41,606 INFO [RS:3;jenkins-hbase4:32935] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:41,607 INFO [RS:3;jenkins-hbase4:32935] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 22:18:41,608 INFO [RS:3;jenkins-hbase4:32935] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32935 2023-07-12 22:18:41,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/.tmp/m/cad70286450d41a881e9e42e13958fb8 2023-07-12 22:18:41,626 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,627 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/info/bf8985a7f6c640cb8a3a3380a1cbf5d8 2023-07-12 22:18:41,637 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf8985a7f6c640cb8a3a3380a1cbf5d8 2023-07-12 22:18:41,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cad70286450d41a881e9e42e13958fb8 2023-07-12 22:18:41,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/.tmp/info/cfe58b02d6ae4296a034b90e5630223e 2023-07-12 22:18:41,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/.tmp/m/cad70286450d41a881e9e42e13958fb8 as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/m/cad70286450d41a881e9e42e13958fb8 2023-07-12 22:18:41,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cfe58b02d6ae4296a034b90e5630223e 2023-07-12 22:18:41,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/.tmp/info/cfe58b02d6ae4296a034b90e5630223e as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/info/cfe58b02d6ae4296a034b90e5630223e 2023-07-12 22:18:41,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cad70286450d41a881e9e42e13958fb8 2023-07-12 22:18:41,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/m/cad70286450d41a881e9e42e13958fb8, entries=12, sequenceid=29, filesize=5.4 K 2023-07-12 22:18:41,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 88c2f1300a1790e16ac535aa0e61d6f0 in 50ms, sequenceid=29, compaction requested=false 2023-07-12 22:18:41,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 22:18:41,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cfe58b02d6ae4296a034b90e5630223e 2023-07-12 22:18:41,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/info/cfe58b02d6ae4296a034b90e5630223e, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 22:18:41,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 85f2090dd4ac7973c8f0983cb6cef03c in 52ms, sequenceid=9, compaction requested=false 2023-07-12 22:18:41,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 22:18:41,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/rep_barrier/2441494b498f4aec88a64f97990977db 2023-07-12 22:18:41,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/rsgroup/88c2f1300a1790e16ac535aa0e61d6f0/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 22:18:41,666 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:41,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/namespace/85f2090dd4ac7973c8f0983cb6cef03c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 22:18:41,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:41,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88c2f1300a1790e16ac535aa0e61d6f0: 2023-07-12 22:18:41,666 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689200318800.88c2f1300a1790e16ac535aa0e61d6f0. 2023-07-12 22:18:41,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 
2023-07-12 22:18:41,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 85f2090dd4ac7973c8f0983cb6cef03c: 2023-07-12 22:18:41,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689200318814.85f2090dd4ac7973c8f0983cb6cef03c. 2023-07-12 22:18:41,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2441494b498f4aec88a64f97990977db 2023-07-12 22:18:41,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/table/8c5e423c5ac940b78226d0a791a72f3e 2023-07-12 22:18:41,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8c5e423c5ac940b78226d0a791a72f3e 2023-07-12 22:18:41,685 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/info/bf8985a7f6c640cb8a3a3380a1cbf5d8 as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/info/bf8985a7f6c640cb8a3a3380a1cbf5d8 2023-07-12 22:18:41,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf8985a7f6c640cb8a3a3380a1cbf5d8 2023-07-12 22:18:41,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/info/bf8985a7f6c640cb8a3a3380a1cbf5d8, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 22:18:41,691 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/rep_barrier/2441494b498f4aec88a64f97990977db as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/rep_barrier/2441494b498f4aec88a64f97990977db 2023-07-12 22:18:41,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2441494b498f4aec88a64f97990977db 2023-07-12 22:18:41,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/rep_barrier/2441494b498f4aec88a64f97990977db, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 22:18:41,698 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/.tmp/table/8c5e423c5ac940b78226d0a791a72f3e as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/table/8c5e423c5ac940b78226d0a791a72f3e 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): 
regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,701 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32935,1689200319304 2023-07-12 22:18:41,702 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,703 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32935,1689200319304] 2023-07-12 22:18:41,703 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32935,1689200319304; numProcessing=1 2023-07-12 22:18:41,704 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32935,1689200319304 already deleted, retry=false 2023-07-12 22:18:41,704 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32935,1689200319304 expired; onlineServers=3 2023-07-12 22:18:41,704 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8c5e423c5ac940b78226d0a791a72f3e 2023-07-12 22:18:41,705 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/table/8c5e423c5ac940b78226d0a791a72f3e, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 22:18:41,705 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 105ms, sequenceid=26, compaction requested=false 2023-07-12 22:18:41,706 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 22:18:41,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 22:18:41,719 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 22:18:41,720 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:41,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 22:18:41,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 22:18:41,798 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43539,1689200317882; all regions closed. 2023-07-12 22:18:41,800 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46711,1689200317552; all regions closed. 2023-07-12 22:18:41,803 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38421,1689200317714; all regions closed. 2023-07-12 22:18:41,809 DEBUG [RS:2;jenkins-hbase4:43539] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43539%2C1689200317882:(num 1689200318529) 2023-07-12 22:18:41,809 DEBUG [RS:2;jenkins-hbase4:43539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:41,809 INFO [RS:2;jenkins-hbase4:43539] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:41,810 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 22:18:41,811 INFO [RS:2;jenkins-hbase4:43539] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43539 2023-07-12 22:18:41,813 DEBUG [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs 2023-07-12 22:18:41,813 INFO [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46711%2C1689200317552.meta:.meta(num 1689200318745) 2023-07-12 22:18:41,814 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:41,814 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,815 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:41,815 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43539,1689200317882 2023-07-12 22:18:41,815 DEBUG [RS:1;jenkins-hbase4:38421] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38421%2C1689200317714:(num 1689200318504) 2023-07-12 22:18:41,815 DEBUG [RS:1;jenkins-hbase4:38421] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 22:18:41,815 INFO [RS:1;jenkins-hbase4:38421] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 22:18:41,815 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 22:18:41,816 INFO [RS:1;jenkins-hbase4:38421] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38421 2023-07-12 22:18:41,818 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43539,1689200317882] 2023-07-12 22:18:41,818 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43539,1689200317882; numProcessing=2 2023-07-12 22:18:41,819 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:41,819 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38421,1689200317714 2023-07-12 22:18:41,819 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,820 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43539,1689200317882 already deleted, retry=false 2023-07-12 22:18:41,820 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43539,1689200317882 expired; onlineServers=2 2023-07-12 22:18:41,820 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38421,1689200317714] 2023-07-12 22:18:41,820 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38421,1689200317714; numProcessing=3 2023-07-12 22:18:41,821 DEBUG [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/oldWALs 2023-07-12 22:18:41,821 INFO [RS:0;jenkins-hbase4:46711] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46711%2C1689200317552:(num 1689200318504) 2023-07-12 22:18:41,821 DEBUG [RS:0;jenkins-hbase4:46711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,821 INFO [RS:0;jenkins-hbase4:46711] regionserver.LeaseManager(133): Closed leases 2023-07-12 22:18:41,821 INFO [RS:0;jenkins-hbase4:46711] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 22:18:41,822 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 22:18:41,822 INFO [RS:0;jenkins-hbase4:46711] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46711 2023-07-12 22:18:41,823 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38421,1689200317714 already deleted, retry=false 2023-07-12 22:18:41,823 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38421,1689200317714 expired; onlineServers=1 2023-07-12 22:18:41,824 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46711,1689200317552 2023-07-12 22:18:41,824 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 22:18:41,826 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46711,1689200317552] 2023-07-12 22:18:41,826 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46711,1689200317552; numProcessing=4 2023-07-12 22:18:41,827 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46711,1689200317552 already deleted, retry=false 2023-07-12 22:18:41,827 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46711,1689200317552 expired; onlineServers=0 2023-07-12 22:18:41,827 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35207,1689200317344' ***** 2023-07-12 22:18:41,827 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 22:18:41,828 DEBUG [M:0;jenkins-hbase4:35207] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30ca0c47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 22:18:41,828 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 22:18:41,830 INFO [M:0;jenkins-hbase4:35207] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@75701f75{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 22:18:41,830 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 22:18:41,830 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 22:18:41,831 INFO [M:0;jenkins-hbase4:35207] server.AbstractConnector(383): Stopped ServerConnector@75842d05{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,831 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 22:18:41,831 INFO [M:0;jenkins-hbase4:35207] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 22:18:41,831 INFO [M:0;jenkins-hbase4:35207] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@288b05aa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 22:18:41,832 INFO [M:0;jenkins-hbase4:35207] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32bc0174{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/hadoop.log.dir/,STOPPED} 2023-07-12 22:18:41,832 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35207,1689200317344 2023-07-12 22:18:41,833 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35207,1689200317344; all regions closed. 2023-07-12 22:18:41,833 DEBUG [M:0;jenkins-hbase4:35207] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 22:18:41,833 INFO [M:0;jenkins-hbase4:35207] master.HMaster(1491): Stopping master jetty server 2023-07-12 22:18:41,833 INFO [M:0;jenkins-hbase4:35207] server.AbstractConnector(383): Stopped ServerConnector@1fac623c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 22:18:41,834 DEBUG [M:0;jenkins-hbase4:35207] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 22:18:41,834 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 22:18:41,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200318293] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689200318293,5,FailOnTimeoutGroup] 2023-07-12 22:18:41,834 DEBUG [M:0;jenkins-hbase4:35207] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 22:18:41,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200318293] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689200318293,5,FailOnTimeoutGroup] 2023-07-12 22:18:41,834 INFO [M:0;jenkins-hbase4:35207] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 22:18:41,834 INFO [M:0;jenkins-hbase4:35207] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 22:18:41,834 INFO [M:0;jenkins-hbase4:35207] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 22:18:41,834 DEBUG [M:0;jenkins-hbase4:35207] master.HMaster(1512): Stopping service threads 2023-07-12 22:18:41,834 INFO [M:0;jenkins-hbase4:35207] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 22:18:41,834 ERROR [M:0;jenkins-hbase4:35207] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 22:18:41,834 INFO [M:0;jenkins-hbase4:35207] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 22:18:41,835 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 22:18:41,835 DEBUG [M:0;jenkins-hbase4:35207] zookeeper.ZKUtil(398): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 22:18:41,835 WARN [M:0;jenkins-hbase4:35207] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 22:18:41,835 INFO [M:0;jenkins-hbase4:35207] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 22:18:41,835 INFO [M:0;jenkins-hbase4:35207] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 22:18:41,835 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 22:18:41,835 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:41,835 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:41,835 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 22:18:41,835 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 22:18:41,835 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.17 KB heapSize=90.63 KB 2023-07-12 22:18:41,847 INFO [M:0;jenkins-hbase4:35207] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.17 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9438c47d89b04547bf521b580177514c 2023-07-12 22:18:41,852 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9438c47d89b04547bf521b580177514c as hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9438c47d89b04547bf521b580177514c 2023-07-12 22:18:41,857 INFO [M:0;jenkins-hbase4:35207] regionserver.HStore(1080): Added hdfs://localhost:34687/user/jenkins/test-data/990407f2-0621-4df3-fc27-8db8c9d825e3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9438c47d89b04547bf521b580177514c, entries=22, sequenceid=175, filesize=11.1 K 2023-07-12 22:18:41,858 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegion(2948): Finished flush of dataSize ~76.17 KB/77999, heapSize ~90.62 KB/92792, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false 2023-07-12 22:18:41,861 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 22:18:41,861 DEBUG [M:0;jenkins-hbase4:35207] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 22:18:41,864 INFO [M:0;jenkins-hbase4:35207] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 22:18:41,864 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 22:18:41,864 INFO [M:0;jenkins-hbase4:35207] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35207 2023-07-12 22:18:41,866 DEBUG [M:0;jenkins-hbase4:35207] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35207,1689200317344 already deleted, retry=false 2023-07-12 22:18:41,981 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:41,981 INFO [M:0;jenkins-hbase4:35207] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35207,1689200317344; zookeeper connection closed. 2023-07-12 22:18:41,981 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): master:35207-0x1015b9dd68f0000, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,081 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,081 INFO [RS:0;jenkins-hbase4:46711] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46711,1689200317552; zookeeper connection closed. 
2023-07-12 22:18:42,081 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:46711-0x1015b9dd68f0001, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,082 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7d75701d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7d75701d 2023-07-12 22:18:42,182 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,182 INFO [RS:1;jenkins-hbase4:38421] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38421,1689200317714; zookeeper connection closed. 2023-07-12 22:18:42,182 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:38421-0x1015b9dd68f0002, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,182 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4733725] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4733725 2023-07-12 22:18:42,282 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,282 INFO [RS:2;jenkins-hbase4:43539] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43539,1689200317882; zookeeper connection closed. 2023-07-12 22:18:42,282 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:43539-0x1015b9dd68f0003, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,282 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@489abc04] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@489abc04 2023-07-12 22:18:42,382 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,382 INFO [RS:3;jenkins-hbase4:32935] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32935,1689200319304; zookeeper connection closed. 
2023-07-12 22:18:42,382 DEBUG [Listener at localhost/36883-EventThread] zookeeper.ZKWatcher(600): regionserver:32935-0x1015b9dd68f000b, quorum=127.0.0.1:61599, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 22:18:42,383 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@26156f5e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@26156f5e 2023-07-12 22:18:42,383 INFO [Listener at localhost/36883] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 22:18:42,383 WARN [Listener at localhost/36883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:42,387 INFO [Listener at localhost/36883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:42,491 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:42,491 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1534897092-172.31.14.131-1689200316579 (Datanode Uuid dc27d2d0-dff6-433e-93b9-4396c3a3ddce) service to localhost/127.0.0.1:34687 2023-07-12 22:18:42,491 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data5/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,492 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data6/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,493 WARN [Listener at localhost/36883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:42,495 INFO [Listener at localhost/36883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:42,598 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:42,598 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1534897092-172.31.14.131-1689200316579 (Datanode Uuid 4df045c7-9cc3-4b0f-9717-17f374307a32) service to localhost/127.0.0.1:34687 2023-07-12 22:18:42,599 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data3/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,599 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data4/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,600 WARN [Listener at localhost/36883] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 22:18:42,604 INFO [Listener at localhost/36883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:42,707 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 22:18:42,707 WARN [BP-1534897092-172.31.14.131-1689200316579 heartbeating to localhost/127.0.0.1:34687] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1534897092-172.31.14.131-1689200316579 (Datanode Uuid 9c10f582-c84a-420a-a412-7bcf0c59e88a) service to localhost/127.0.0.1:34687 2023-07-12 22:18:42,708 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data1/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,708 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/98fb253d-0341-6340-4f7c-e53307e244c6/cluster_0c674441-539f-fb6c-b90f-d9724dd383e5/dfs/data/data2/current/BP-1534897092-172.31.14.131-1689200316579] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 22:18:42,724 INFO [Listener at localhost/36883] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 22:18:42,845 INFO [Listener at localhost/36883] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 22:18:42,885 INFO [Listener at localhost/36883] hbase.HBaseTestingUtility(1293): Minicluster is down